00:00:00.000 Started by upstream project "spdk-dpdk-per-patch" build number 296
00:00:00.000 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.061 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.061 The recommended git tool is: git
00:00:00.061 using credential 00000000-0000-0000-0000-000000000002
00:00:00.063 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.084 Fetching changes from the remote Git repository
00:00:00.089 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.127 Using shallow fetch with depth 1
00:00:00.127 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.127 > git --version # timeout=10
00:00:00.176 > git --version # 'git version 2.39.2'
00:00:00.176 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.207 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.207 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:06.932 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:06.943 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:06.954 Checking out Revision 58e4f482292076ec19d68e6712473e60ef956aed (FETCH_HEAD)
00:00:06.954 > git config core.sparsecheckout # timeout=10
00:00:06.966 > git read-tree -mu HEAD # timeout=10
00:00:06.982 > git checkout -f 58e4f482292076ec19d68e6712473e60ef956aed # timeout=5
00:00:07.007 Commit message: "packer: Fix typo in a package name"
00:00:07.008 > git rev-list --no-walk 58e4f482292076ec19d68e6712473e60ef956aed # timeout=10
00:00:07.130 [Pipeline] Start of Pipeline
00:00:07.143 [Pipeline] library
00:00:07.145 Loading library shm_lib@master
00:00:07.145 Library shm_lib@master is cached. Copying from home.
00:00:07.161 [Pipeline] node
00:00:07.173 Running on VM-host-WFP7 in /var/jenkins/workspace/raid-vg-autotest
00:00:07.174 [Pipeline] {
00:00:07.184 [Pipeline] catchError
00:00:07.185 [Pipeline] {
00:00:07.197 [Pipeline] wrap
00:00:07.205 [Pipeline] {
00:00:07.212 [Pipeline] stage
00:00:07.214 [Pipeline] { (Prologue)
00:00:07.231 [Pipeline] echo
00:00:07.232 Node: VM-host-WFP7
00:00:07.238 [Pipeline] cleanWs
00:00:07.248 [WS-CLEANUP] Deleting project workspace...
00:00:07.248 [WS-CLEANUP] Deferred wipeout is used...
00:00:07.255 [WS-CLEANUP] done
00:00:07.435 [Pipeline] setCustomBuildProperty
00:00:07.532 [Pipeline] httpRequest
00:00:07.925 [Pipeline] echo
00:00:07.927 Sorcerer 10.211.164.101 is alive
00:00:07.934 [Pipeline] retry
00:00:07.936 [Pipeline] {
00:00:07.947 [Pipeline] httpRequest
00:00:07.952 HttpMethod: GET
00:00:07.952 URL: http://10.211.164.101/packages/jbp_58e4f482292076ec19d68e6712473e60ef956aed.tar.gz
00:00:07.953 Sending request to url: http://10.211.164.101/packages/jbp_58e4f482292076ec19d68e6712473e60ef956aed.tar.gz
00:00:07.968 Response Code: HTTP/1.1 200 OK
00:00:07.968 Success: Status code 200 is in the accepted range: 200,404
00:00:07.969 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_58e4f482292076ec19d68e6712473e60ef956aed.tar.gz
00:00:13.815 [Pipeline] }
00:00:13.829 [Pipeline] // retry
00:00:13.835 [Pipeline] sh
00:00:14.118 + tar --no-same-owner -xf jbp_58e4f482292076ec19d68e6712473e60ef956aed.tar.gz
00:00:14.135 [Pipeline] httpRequest
00:00:14.505 [Pipeline] echo
00:00:14.507 Sorcerer 10.211.164.101 is alive
00:00:14.517 [Pipeline] retry
00:00:14.520 [Pipeline] {
00:00:14.535 [Pipeline] httpRequest
00:00:14.540 HttpMethod: GET
00:00:14.540 URL: http://10.211.164.101/packages/spdk_1042d663d395fdb56d1a03c64ee259fa9237faa3.tar.gz
00:00:14.541 Sending request to url: http://10.211.164.101/packages/spdk_1042d663d395fdb56d1a03c64ee259fa9237faa3.tar.gz
00:00:14.561 Response Code: HTTP/1.1 200 OK
00:00:14.561 Success: Status code 200 is in the accepted range: 200,404
00:00:14.562 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_1042d663d395fdb56d1a03c64ee259fa9237faa3.tar.gz
00:00:55.262 [Pipeline] }
00:00:55.281 [Pipeline] // retry
00:00:55.290 [Pipeline] sh
00:00:55.573 + tar --no-same-owner -xf spdk_1042d663d395fdb56d1a03c64ee259fa9237faa3.tar.gz
00:00:58.125 [Pipeline] sh
00:00:58.407 + git -C spdk log --oneline -n5
00:00:58.408 1042d663d env_dpdk: align dpdk headers with upstream
00:00:58.408 f417ec25e pkgdep/git: Add patches to ICE driver for changes in >= 6.11 kernels
00:00:58.408 b83903543 pkgdep/git: Add small patch to irdma for >= 6.11 kernels
00:00:58.408 214b0826b nvmf: clear visible_ns flag when no_auto_visible is unset
00:00:58.408 bfd014b57 nvmf: add function for setting ns visibility
00:00:58.421 [Pipeline] sh
00:00:58.703 + git -C spdk/dpdk fetch https://review.spdk.io/gerrit/spdk/dpdk refs/changes/02/25102/2
00:00:59.644 From https://review.spdk.io/gerrit/spdk/dpdk
00:00:59.644  * branch refs/changes/02/25102/2 -> FETCH_HEAD
00:00:59.657 [Pipeline] sh
00:00:59.940 + git -C spdk/dpdk checkout FETCH_HEAD
00:01:00.510 Previous HEAD position was 8d8db71763 eal/alarm_cancel: Fix thread starvation
00:01:00.510 HEAD is now at 39efe3d81c dmadev: fix calloc parameters
00:01:00.526 [Pipeline] writeFile
00:01:00.541 [Pipeline] sh
00:01:00.825 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:01:00.838 [Pipeline] sh
00:01:01.118 + cat autorun-spdk.conf
00:01:01.118 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:01.118 SPDK_RUN_ASAN=1
00:01:01.118 SPDK_RUN_UBSAN=1
00:01:01.118 SPDK_TEST_RAID=1
00:01:01.118 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:01.126 RUN_NIGHTLY=
00:01:01.128 [Pipeline] }
00:01:01.142 [Pipeline] // stage
00:01:01.156 [Pipeline] stage
00:01:01.159 [Pipeline] { (Run VM)
00:01:01.172 [Pipeline] sh
00:01:01.457 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:01:01.457 + echo 'Start stage prepare_nvme.sh'
00:01:01.457 Start stage prepare_nvme.sh
00:01:01.457 + [[ -n 7 ]]
00:01:01.457 + disk_prefix=ex7
00:01:01.457 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]]
00:01:01.457 + [[ -e /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]]
00:01:01.457 + source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf
00:01:01.457 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:01.457 ++ SPDK_RUN_ASAN=1
00:01:01.457 ++ SPDK_RUN_UBSAN=1
00:01:01.457 ++ SPDK_TEST_RAID=1
00:01:01.457 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:01.457 ++ RUN_NIGHTLY=
00:01:01.457 + cd /var/jenkins/workspace/raid-vg-autotest
00:01:01.457 + nvme_files=()
00:01:01.457 + declare -A nvme_files
00:01:01.457 + backend_dir=/var/lib/libvirt/images/backends
00:01:01.457 + nvme_files['nvme.img']=5G
00:01:01.457 + nvme_files['nvme-cmb.img']=5G
00:01:01.457 + nvme_files['nvme-multi0.img']=4G
00:01:01.457 + nvme_files['nvme-multi1.img']=4G
00:01:01.457 + nvme_files['nvme-multi2.img']=4G
00:01:01.457 + nvme_files['nvme-openstack.img']=8G
00:01:01.457 + nvme_files['nvme-zns.img']=5G
00:01:01.457 + (( SPDK_TEST_NVME_PMR == 1 ))
00:01:01.457 + (( SPDK_TEST_FTL == 1 ))
00:01:01.457 + (( SPDK_TEST_NVME_FDP == 1 ))
00:01:01.457 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:01:01.457 + for nvme in "${!nvme_files[@]}"
00:01:01.457 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi2.img -s 4G
00:01:01.457 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:01:01.457 + for nvme in "${!nvme_files[@]}"
00:01:01.457 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-cmb.img -s 5G
00:01:01.457 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:01:01.457 + for nvme in "${!nvme_files[@]}"
00:01:01.457 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-openstack.img -s 8G
00:01:01.458 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:01:01.458 + for nvme in "${!nvme_files[@]}"
00:01:01.458 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-zns.img -s 5G
00:01:01.458 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:01:01.458 + for nvme in "${!nvme_files[@]}"
00:01:01.458 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi1.img -s 4G
00:01:01.458 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:01:01.458 + for nvme in "${!nvme_files[@]}"
00:01:01.458 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi0.img -s 4G
00:01:01.458 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:01:01.458 + for nvme in "${!nvme_files[@]}"
00:01:01.458 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme.img -s 5G
00:01:01.717 Formatting '/var/lib/libvirt/images/backends/ex7-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:01:01.717 ++ sudo grep -rl ex7-nvme.img /etc/libvirt/qemu
00:01:01.717 + echo 'End stage prepare_nvme.sh'
00:01:01.717 End stage prepare_nvme.sh
00:01:01.729 [Pipeline] sh
00:01:02.070 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:01:02.070 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex7-nvme.img -b /var/lib/libvirt/images/backends/ex7-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img -H -a -v -f fedora39
00:01:02.070
00:01:02.070 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant
00:01:02.070 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk
00:01:02.070 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest
00:01:02.070 HELP=0
00:01:02.070 DRY_RUN=0
00:01:02.070 NVME_FILE=/var/lib/libvirt/images/backends/ex7-nvme.img,/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,
00:01:02.070 NVME_DISKS_TYPE=nvme,nvme,
00:01:02.070 NVME_AUTO_CREATE=0
00:01:02.070 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,
00:01:02.070 NVME_CMB=,,
00:01:02.070 NVME_PMR=,,
00:01:02.070 NVME_ZNS=,,
00:01:02.070 NVME_MS=,,
00:01:02.070 NVME_FDP=,,
00:01:02.070 SPDK_VAGRANT_DISTRO=fedora39
00:01:02.070 SPDK_VAGRANT_VMCPU=10
00:01:02.070 SPDK_VAGRANT_VMRAM=12288
00:01:02.070 SPDK_VAGRANT_PROVIDER=libvirt
00:01:02.070 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:01:02.070 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:01:02.070 SPDK_OPENSTACK_NETWORK=0
00:01:02.070 VAGRANT_PACKAGE_BOX=0
00:01:02.070 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:01:02.070 FORCE_DISTRO=true
00:01:02.070 VAGRANT_BOX_VERSION=
00:01:02.070 EXTRA_VAGRANTFILES=
00:01:02.070 NIC_MODEL=virtio
00:01:02.070
00:01:02.070 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt'
00:01:02.070 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest
00:01:03.976 Bringing machine 'default' up with 'libvirt' provider...
00:01:04.546 ==> default: Creating image (snapshot of base box volume).
00:01:04.546 ==> default: Creating domain with the following settings...
00:01:04.546 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1729503940_15e416ce79553590ef5d
00:01:04.546 ==> default: -- Domain type: kvm
00:01:04.546 ==> default: -- Cpus: 10
00:01:04.546 ==> default: -- Feature: acpi
00:01:04.546 ==> default: -- Feature: apic
00:01:04.546 ==> default: -- Feature: pae
00:01:04.546 ==> default: -- Memory: 12288M
00:01:04.546 ==> default: -- Memory Backing: hugepages:
00:01:04.546 ==> default: -- Management MAC:
00:01:04.546 ==> default: -- Loader:
00:01:04.546 ==> default: -- Nvram:
00:01:04.546 ==> default: -- Base box: spdk/fedora39
00:01:04.546 ==> default: -- Storage pool: default
00:01:04.546 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1729503940_15e416ce79553590ef5d.img (20G)
00:01:04.546 ==> default: -- Volume Cache: default
00:01:04.546 ==> default: -- Kernel:
00:01:04.546 ==> default: -- Initrd:
00:01:04.546 ==> default: -- Graphics Type: vnc
00:01:04.546 ==> default: -- Graphics Port: -1
00:01:04.546 ==> default: -- Graphics IP: 127.0.0.1
00:01:04.546 ==> default: -- Graphics Password: Not defined
00:01:04.546 ==> default: -- Video Type: cirrus
00:01:04.546 ==> default: -- Video VRAM: 9216
00:01:04.546 ==> default: -- Sound Type:
00:01:04.546 ==> default: -- Keymap: en-us
00:01:04.546 ==> default: -- TPM Path:
00:01:04.546 ==> default: -- INPUT: type=mouse, bus=ps2
00:01:04.546 ==> default: -- Command line args:
00:01:04.546 ==> default: -> value=-device,
00:01:04.546 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:01:04.546 ==> default: -> value=-drive,
00:01:04.546 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme.img,if=none,id=nvme-0-drive0,
00:01:04.546 ==> default: -> value=-device,
00:01:04.546 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:04.546 ==> default: -> value=-device,
00:01:04.546 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:01:04.546 ==> default: -> value=-drive,
00:01:04.546 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:01:04.546 ==> default: -> value=-device,
00:01:04.546 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:04.546 ==> default: -> value=-drive,
00:01:04.546 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:01:04.546 ==> default: -> value=-device,
00:01:04.546 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:04.546 ==> default: -> value=-drive,
00:01:04.546 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:01:04.546 ==> default: -> value=-device,
00:01:04.546 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:04.806 ==> default: Creating shared folders metadata...
00:01:04.806 ==> default: Starting domain.
00:01:06.715 ==> default: Waiting for domain to get an IP address...
00:01:24.815 ==> default: Waiting for SSH to become available...
00:01:24.815 ==> default: Configuring and enabling network interfaces...
00:01:30.091 default: SSH address: 192.168.121.22:22
00:01:30.091 default: SSH username: vagrant
00:01:30.091 default: SSH auth method: private key
00:01:32.655 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:01:40.778 ==> default: Mounting SSHFS shared folder...
00:01:42.709 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:01:42.709 ==> default: Checking Mount..
00:01:44.089 ==> default: Folder Successfully Mounted!
00:01:44.089 ==> default: Running provisioner: file...
00:01:45.030 default: ~/.gitconfig => .gitconfig
00:01:45.599
00:01:45.599 SUCCESS!
00:01:45.599
00:01:45.599 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:01:45.599 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:01:45.599 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:01:45.599
00:01:45.609 [Pipeline] }
00:01:45.621 [Pipeline] // stage
00:01:45.630 [Pipeline] dir
00:01:45.631 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt
00:01:45.633 [Pipeline] {
00:01:45.646 [Pipeline] catchError
00:01:45.647 [Pipeline] {
00:01:45.660 [Pipeline] sh
00:01:45.943 + vagrant ssh-config --host vagrant
00:01:45.943 + sed -ne /^Host/,$p
00:01:45.943 + tee ssh_conf
00:01:48.475 Host vagrant
00:01:48.475   HostName 192.168.121.22
00:01:48.475   User vagrant
00:01:48.475   Port 22
00:01:48.475   UserKnownHostsFile /dev/null
00:01:48.475   StrictHostKeyChecking no
00:01:48.475   PasswordAuthentication no
00:01:48.475   IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:01:48.475   IdentitiesOnly yes
00:01:48.475   LogLevel FATAL
00:01:48.475   ForwardAgent yes
00:01:48.475   ForwardX11 yes
00:01:48.475
00:01:48.490 [Pipeline] withEnv
00:01:48.492 [Pipeline] {
00:01:48.505 [Pipeline] sh
00:01:48.790 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:01:48.790 source /etc/os-release
00:01:48.790 [[ -e /image.version ]] && img=$(< /image.version)
00:01:48.790 # Minimal, systemd-like check.
00:01:48.790 if [[ -e /.dockerenv ]]; then
00:01:48.790 # Clear garbage from the node's name:
00:01:48.790 # agt-er_autotest_547-896 -> autotest_547-896
00:01:48.790 # $HOSTNAME is the actual container id
00:01:48.790 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:01:48.790 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:01:48.790 # We can assume this is a mount from a host where container is running,
00:01:48.790 # so fetch its hostname to easily identify the target swarm worker.
00:01:48.790 container="$(< /etc/hostname) ($agent)"
00:01:48.790 else
00:01:48.790 # Fallback
00:01:48.790 container=$agent
00:01:48.790 fi
00:01:48.790 fi
00:01:48.790 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:01:48.790
00:01:49.062 [Pipeline] }
00:01:49.078 [Pipeline] // withEnv
00:01:49.087 [Pipeline] setCustomBuildProperty
00:01:49.100 [Pipeline] stage
00:01:49.102 [Pipeline] { (Tests)
00:01:49.119 [Pipeline] sh
00:01:49.403 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:01:49.678 [Pipeline] sh
00:01:49.962 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:01:50.238 [Pipeline] timeout
00:01:50.238 Timeout set to expire in 1 hr 30 min
00:01:50.240 [Pipeline] {
00:01:50.255 [Pipeline] sh
00:01:50.541 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:01:51.109 HEAD is now at 1042d663d env_dpdk: align dpdk headers with upstream
00:01:51.121 [Pipeline] sh
00:01:51.422 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:01:51.713 [Pipeline] sh
00:01:51.997 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:01:52.274 [Pipeline] sh
00:01:52.556 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo
00:01:52.817 ++ readlink -f spdk_repo
00:01:52.817 + DIR_ROOT=/home/vagrant/spdk_repo
00:01:52.817 + [[ -n /home/vagrant/spdk_repo ]]
00:01:52.817 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:01:52.817 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:01:52.817 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:01:52.817 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:01:52.817 + [[ -d /home/vagrant/spdk_repo/output ]]
00:01:52.817 + [[ raid-vg-autotest == pkgdep-* ]]
00:01:52.817 + cd /home/vagrant/spdk_repo
00:01:52.817 + source /etc/os-release
00:01:52.817 ++ NAME='Fedora Linux'
00:01:52.817 ++ VERSION='39 (Cloud Edition)'
00:01:52.817 ++ ID=fedora
00:01:52.817 ++ VERSION_ID=39
00:01:52.817 ++ VERSION_CODENAME=
00:01:52.817 ++ PLATFORM_ID=platform:f39
00:01:52.817 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:52.817 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:52.817 ++ LOGO=fedora-logo-icon
00:01:52.817 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:52.817 ++ HOME_URL=https://fedoraproject.org/
00:01:52.817 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:52.817 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:52.817 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:52.817 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:52.817 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:52.817 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:52.817 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:52.817 ++ SUPPORT_END=2024-11-12
00:01:52.817 ++ VARIANT='Cloud Edition'
00:01:52.817 ++ VARIANT_ID=cloud
00:01:52.817 + uname -a
00:01:52.817 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:52.817 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:01:53.388 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:01:53.388 Hugepages
00:01:53.388 node hugesize free / total
00:01:53.388 node0 1048576kB 0 / 0
00:01:53.388 node0 2048kB 0 / 0
00:01:53.388
00:01:53.388 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:53.388 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:01:53.388 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:01:53.388 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:01:53.388 + rm -f /tmp/spdk-ld-path
00:01:53.388 + source autorun-spdk.conf
00:01:53.388 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:53.388 ++ SPDK_RUN_ASAN=1
00:01:53.388 ++ SPDK_RUN_UBSAN=1
00:01:53.388 ++ SPDK_TEST_RAID=1
00:01:53.388 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:53.388 ++ RUN_NIGHTLY=
00:01:53.388 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:53.388 + [[ -n '' ]]
00:01:53.388 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:01:53.388 + for M in /var/spdk/build-*-manifest.txt
00:01:53.388 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:53.388 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:01:53.649 + for M in /var/spdk/build-*-manifest.txt
00:01:53.649 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:53.649 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:01:53.649 + for M in /var/spdk/build-*-manifest.txt
00:01:53.649 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:53.649 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:01:53.649 ++ uname
00:01:53.649 + [[ Linux == \L\i\n\u\x ]]
00:01:53.649 + sudo dmesg -T
00:01:53.649 + sudo dmesg --clear
00:01:53.649 + dmesg_pid=5420
00:01:53.649 + [[ Fedora Linux == FreeBSD ]]
00:01:53.649 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:53.649 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:53.649 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:53.649 + sudo dmesg -Tw
00:01:53.649 + [[ -x /usr/src/fio-static/fio ]]
00:01:53.649 + export FIO_BIN=/usr/src/fio-static/fio
00:01:53.649 + FIO_BIN=/usr/src/fio-static/fio
00:01:53.649 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:53.649 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:53.649 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:53.649 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:53.649 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:53.649 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:53.649 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:53.649 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:53.649 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:53.649 Test configuration:
00:01:53.649 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:53.649 SPDK_RUN_ASAN=1
00:01:53.649 SPDK_RUN_UBSAN=1
00:01:53.649 SPDK_TEST_RAID=1
00:01:53.649 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:53.649 RUN_NIGHTLY=
09:46:30 -- common/autotest_common.sh@1690 -- $ [[ n == y ]]
00:01:53.649 09:46:30 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
09:46:30 -- scripts/common.sh@15 -- $ shopt -s extglob
00:01:53.649 09:46:30 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
09:46:30 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:53.649 09:46:30 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
09:46:30 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:53.649 09:46:30 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
09:46:30 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:53.649 09:46:30 -- paths/export.sh@5 -- $ export PATH
09:46:30 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:53.649 09:46:30 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output
09:46:30 -- common/autobuild_common.sh@486 -- $ date +%s
00:01:53.649 09:46:30 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1729503990.XXXXXX
09:46:30 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1729503990.oDjt3h
00:01:53.649 09:46:30 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]]
09:46:30 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']'
00:01:53.649 09:46:30 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
09:46:30 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:01:53.649 09:46:30 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
09:46:30 -- common/autobuild_common.sh@502 -- $ get_config_params
00:01:53.909 09:46:30 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:01:53.909 09:46:30 -- common/autotest_common.sh@10 -- $ set +x
00:01:53.909 09:46:30 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f'
09:46:30 -- common/autobuild_common.sh@504 -- $ start_monitor_resources
00:01:53.909 09:46:30 -- pm/common@17 -- $ local monitor
00:01:53.909 09:46:30 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:53.909 09:46:30 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:53.909 09:46:30 -- pm/common@25 -- $ sleep 1
00:01:53.909 09:46:30 -- pm/common@21 -- $ date +%s
00:01:53.909 09:46:30 -- pm/common@21 -- $ date +%s
00:01:53.909 09:46:30 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1729503990
00:01:53.909 09:46:30 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1729503990
00:01:53.909 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1729503990_collect-cpu-load.pm.log
00:01:53.909 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1729503990_collect-vmstat.pm.log
00:01:54.850 09:46:31 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
00:01:54.850 09:46:31 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:54.850 09:46:31 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:54.850 09:46:31 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:01:54.850 09:46:31 -- spdk/autobuild.sh@16 -- $ date -u
00:01:54.850 Mon Oct 21 09:46:31 AM UTC 2024
00:01:54.850 09:46:31 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:54.850 v25.01-pre-77-g1042d663d
00:01:54.850 09:46:31 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:01:54.850 09:46:31 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:01:54.850 09:46:31 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:01:54.850 09:46:31 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:01:54.850 09:46:31 -- common/autotest_common.sh@10 -- $ set +x
00:01:54.850 ************************************
00:01:54.850 START TEST asan
00:01:54.850 ************************************
00:01:54.850 using asan
00:01:54.850 09:46:31 asan -- common/autotest_common.sh@1125 -- $ echo 'using asan'
00:01:54.850
00:01:54.850 real	0m0.001s
00:01:54.850 user	0m0.001s
00:01:54.850 sys	0m0.000s
00:01:54.850 09:46:31 asan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:01:54.850 09:46:31 asan -- common/autotest_common.sh@10 -- $ set +x
00:01:54.850 ************************************
00:01:54.850 END TEST asan
00:01:54.850 ************************************
00:01:54.850 09:46:31 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:54.850 09:46:31 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:54.850 09:46:31 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:01:54.850 09:46:31 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:01:54.850 09:46:31 -- common/autotest_common.sh@10 -- $ set +x
00:01:54.850 ************************************
00:01:54.850 START TEST ubsan
00:01:54.850 ************************************
00:01:54.850 using ubsan
00:01:54.850 09:46:31 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan'
00:01:54.850
00:01:54.850 real	0m0.000s
00:01:54.850 user	0m0.000s
00:01:54.850 sys	0m0.000s
00:01:54.850 09:46:31 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:01:54.850 09:46:31 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:54.850 ************************************
00:01:54.850 END TEST ubsan
00:01:54.850 ************************************
00:01:55.110 09:46:31 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:55.110 09:46:31 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:55.110 09:46:31 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:55.110 09:46:31 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:55.110 09:46:31 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:55.110 09:46:31 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:55.110 09:46:31 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:55.110 09:46:31 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:55.110 09:46:31 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared
00:01:55.110 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:01:55.110 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:01:55.679 Using 'verbs' RDMA provider
00:02:11.526 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:02:26.451 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:02:26.710 Creating mk/config.mk...done.
00:02:26.710 Creating mk/cc.flags.mk...done.
00:02:26.710 Type 'make' to build.
00:02:26.710 09:47:03 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:02:26.710 09:47:03 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:26.710 09:47:03 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:26.710 09:47:03 -- common/autotest_common.sh@10 -- $ set +x 00:02:26.710 ************************************ 00:02:26.710 START TEST make 00:02:26.710 ************************************ 00:02:26.710 09:47:03 make -- common/autotest_common.sh@1125 -- $ make -j10 00:02:27.279 make[1]: Nothing to be done for 'all'. 00:02:37.264 The Meson build system 00:02:37.264 Version: 1.5.0 00:02:37.264 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:37.264 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:37.264 Build type: native build 00:02:37.264 Program cat found: YES (/usr/bin/cat) 00:02:37.264 Project name: DPDK 00:02:37.264 Project version: 23.11.0 00:02:37.264 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:37.264 C linker for the host machine: cc ld.bfd 2.40-14 00:02:37.264 Host machine cpu family: x86_64 00:02:37.264 Host machine cpu: x86_64 00:02:37.264 Message: ## Building in Developer Mode ## 00:02:37.264 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:37.264 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:37.264 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:37.264 Program python3 found: YES (/usr/bin/python3) 00:02:37.264 Program cat found: YES (/usr/bin/cat) 00:02:37.264 Compiler for C supports arguments -march=native: YES 00:02:37.264 Checking for size of "void *" : 8 00:02:37.264 Checking for size of "void *" : 8 (cached) 00:02:37.264 Library m found: YES 00:02:37.264 Library numa found: YES 00:02:37.264 Has header "numaif.h" : YES 00:02:37.264 Library fdt found: NO 00:02:37.264 Library execinfo found: NO 00:02:37.264 
Has header "execinfo.h" : YES 00:02:37.264 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:37.264 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:37.264 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:37.264 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:37.264 Run-time dependency openssl found: YES 3.1.1 00:02:37.264 Run-time dependency libpcap found: YES 1.10.4 00:02:37.264 Has header "pcap.h" with dependency libpcap: YES 00:02:37.264 Compiler for C supports arguments -Wcast-qual: YES 00:02:37.264 Compiler for C supports arguments -Wdeprecated: YES 00:02:37.264 Compiler for C supports arguments -Wformat: YES 00:02:37.264 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:37.264 Compiler for C supports arguments -Wformat-security: NO 00:02:37.264 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:37.264 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:37.264 Compiler for C supports arguments -Wnested-externs: YES 00:02:37.264 Compiler for C supports arguments -Wold-style-definition: YES 00:02:37.264 Compiler for C supports arguments -Wpointer-arith: YES 00:02:37.264 Compiler for C supports arguments -Wsign-compare: YES 00:02:37.264 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:37.264 Compiler for C supports arguments -Wundef: YES 00:02:37.264 Compiler for C supports arguments -Wwrite-strings: YES 00:02:37.264 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:37.264 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:37.264 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:37.264 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:37.264 Program objdump found: YES (/usr/bin/objdump) 00:02:37.264 Compiler for C supports arguments -mavx512f: YES 00:02:37.264 Checking if "AVX512 checking" compiles: YES 00:02:37.264 Fetching value of define "__SSE4_2__" : 1 
00:02:37.264 Fetching value of define "__AES__" : 1 00:02:37.264 Fetching value of define "__AVX__" : 1 00:02:37.264 Fetching value of define "__AVX2__" : 1 00:02:37.264 Fetching value of define "__AVX512BW__" : 1 00:02:37.264 Fetching value of define "__AVX512CD__" : 1 00:02:37.264 Fetching value of define "__AVX512DQ__" : 1 00:02:37.264 Fetching value of define "__AVX512F__" : 1 00:02:37.264 Fetching value of define "__AVX512VL__" : 1 00:02:37.264 Fetching value of define "__PCLMUL__" : 1 00:02:37.264 Fetching value of define "__RDRND__" : 1 00:02:37.264 Fetching value of define "__RDSEED__" : 1 00:02:37.264 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:37.264 Fetching value of define "__znver1__" : (undefined) 00:02:37.264 Fetching value of define "__znver2__" : (undefined) 00:02:37.264 Fetching value of define "__znver3__" : (undefined) 00:02:37.264 Fetching value of define "__znver4__" : (undefined) 00:02:37.264 Library asan found: YES 00:02:37.264 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:37.264 Message: lib/log: Defining dependency "log" 00:02:37.264 Message: lib/kvargs: Defining dependency "kvargs" 00:02:37.264 Message: lib/telemetry: Defining dependency "telemetry" 00:02:37.264 Library rt found: YES 00:02:37.264 Checking for function "getentropy" : NO 00:02:37.264 Message: lib/eal: Defining dependency "eal" 00:02:37.264 Message: lib/ring: Defining dependency "ring" 00:02:37.264 Message: lib/rcu: Defining dependency "rcu" 00:02:37.264 Message: lib/mempool: Defining dependency "mempool" 00:02:37.264 Message: lib/mbuf: Defining dependency "mbuf" 00:02:37.264 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:37.264 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:37.264 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:37.264 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:37.264 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:37.264 Fetching value of define 
"__VPCLMULQDQ__" : (undefined) (cached) 00:02:37.264 Compiler for C supports arguments -mpclmul: YES 00:02:37.264 Compiler for C supports arguments -maes: YES 00:02:37.264 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:37.264 Compiler for C supports arguments -mavx512bw: YES 00:02:37.264 Compiler for C supports arguments -mavx512dq: YES 00:02:37.264 Compiler for C supports arguments -mavx512vl: YES 00:02:37.264 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:37.265 Compiler for C supports arguments -mavx2: YES 00:02:37.265 Compiler for C supports arguments -mavx: YES 00:02:37.265 Message: lib/net: Defining dependency "net" 00:02:37.265 Message: lib/meter: Defining dependency "meter" 00:02:37.265 Message: lib/ethdev: Defining dependency "ethdev" 00:02:37.265 Message: lib/pci: Defining dependency "pci" 00:02:37.265 Message: lib/cmdline: Defining dependency "cmdline" 00:02:37.265 Message: lib/hash: Defining dependency "hash" 00:02:37.265 Message: lib/timer: Defining dependency "timer" 00:02:37.265 Message: lib/compressdev: Defining dependency "compressdev" 00:02:37.265 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:37.265 Message: lib/dmadev: Defining dependency "dmadev" 00:02:37.265 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:37.265 Message: lib/power: Defining dependency "power" 00:02:37.265 Message: lib/reorder: Defining dependency "reorder" 00:02:37.265 Message: lib/security: Defining dependency "security" 00:02:37.265 Has header "linux/userfaultfd.h" : YES 00:02:37.265 Has header "linux/vduse.h" : YES 00:02:37.265 Message: lib/vhost: Defining dependency "vhost" 00:02:37.265 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:37.265 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:37.265 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:37.265 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:37.265 Message: Disabling raw/* drivers: 
missing internal dependency "rawdev" 00:02:37.265 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:37.265 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:37.265 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:37.265 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:37.265 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:37.265 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:37.265 Configuring doxy-api-html.conf using configuration 00:02:37.265 Configuring doxy-api-man.conf using configuration 00:02:37.265 Program mandb found: YES (/usr/bin/mandb) 00:02:37.265 Program sphinx-build found: NO 00:02:37.265 Configuring rte_build_config.h using configuration 00:02:37.265 Message: 00:02:37.265 ================= 00:02:37.265 Applications Enabled 00:02:37.265 ================= 00:02:37.265 00:02:37.265 apps: 00:02:37.265 00:02:37.265 00:02:37.265 Message: 00:02:37.265 ================= 00:02:37.265 Libraries Enabled 00:02:37.265 ================= 00:02:37.265 00:02:37.265 libs: 00:02:37.265 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:37.265 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:37.265 cryptodev, dmadev, power, reorder, security, vhost, 00:02:37.265 00:02:37.265 Message: 00:02:37.265 =============== 00:02:37.265 Drivers Enabled 00:02:37.265 =============== 00:02:37.265 00:02:37.265 common: 00:02:37.265 00:02:37.265 bus: 00:02:37.265 pci, vdev, 00:02:37.265 mempool: 00:02:37.265 ring, 00:02:37.265 dma: 00:02:37.265 00:02:37.265 net: 00:02:37.265 00:02:37.265 crypto: 00:02:37.265 00:02:37.265 compress: 00:02:37.265 00:02:37.265 vdpa: 00:02:37.265 00:02:37.265 00:02:37.265 Message: 00:02:37.265 ================= 00:02:37.265 Content Skipped 00:02:37.265 ================= 00:02:37.265 00:02:37.265 apps: 00:02:37.265 dumpcap: explicitly disabled via build config 00:02:37.265 
graph: explicitly disabled via build config 00:02:37.265 pdump: explicitly disabled via build config 00:02:37.265 proc-info: explicitly disabled via build config 00:02:37.265 test-acl: explicitly disabled via build config 00:02:37.265 test-bbdev: explicitly disabled via build config 00:02:37.265 test-cmdline: explicitly disabled via build config 00:02:37.265 test-compress-perf: explicitly disabled via build config 00:02:37.265 test-crypto-perf: explicitly disabled via build config 00:02:37.265 test-dma-perf: explicitly disabled via build config 00:02:37.265 test-eventdev: explicitly disabled via build config 00:02:37.265 test-fib: explicitly disabled via build config 00:02:37.265 test-flow-perf: explicitly disabled via build config 00:02:37.265 test-gpudev: explicitly disabled via build config 00:02:37.265 test-mldev: explicitly disabled via build config 00:02:37.265 test-pipeline: explicitly disabled via build config 00:02:37.265 test-pmd: explicitly disabled via build config 00:02:37.265 test-regex: explicitly disabled via build config 00:02:37.265 test-sad: explicitly disabled via build config 00:02:37.265 test-security-perf: explicitly disabled via build config 00:02:37.265 00:02:37.265 libs: 00:02:37.265 metrics: explicitly disabled via build config 00:02:37.265 acl: explicitly disabled via build config 00:02:37.265 bbdev: explicitly disabled via build config 00:02:37.265 bitratestats: explicitly disabled via build config 00:02:37.265 bpf: explicitly disabled via build config 00:02:37.265 cfgfile: explicitly disabled via build config 00:02:37.265 distributor: explicitly disabled via build config 00:02:37.265 efd: explicitly disabled via build config 00:02:37.265 eventdev: explicitly disabled via build config 00:02:37.265 dispatcher: explicitly disabled via build config 00:02:37.265 gpudev: explicitly disabled via build config 00:02:37.265 gro: explicitly disabled via build config 00:02:37.265 gso: explicitly disabled via build config 00:02:37.265 ip_frag: 
explicitly disabled via build config 00:02:37.265 jobstats: explicitly disabled via build config 00:02:37.265 latencystats: explicitly disabled via build config 00:02:37.265 lpm: explicitly disabled via build config 00:02:37.265 member: explicitly disabled via build config 00:02:37.265 pcapng: explicitly disabled via build config 00:02:37.265 rawdev: explicitly disabled via build config 00:02:37.265 regexdev: explicitly disabled via build config 00:02:37.265 mldev: explicitly disabled via build config 00:02:37.265 rib: explicitly disabled via build config 00:02:37.265 sched: explicitly disabled via build config 00:02:37.265 stack: explicitly disabled via build config 00:02:37.265 ipsec: explicitly disabled via build config 00:02:37.265 pdcp: explicitly disabled via build config 00:02:37.265 fib: explicitly disabled via build config 00:02:37.265 port: explicitly disabled via build config 00:02:37.265 pdump: explicitly disabled via build config 00:02:37.265 table: explicitly disabled via build config 00:02:37.265 pipeline: explicitly disabled via build config 00:02:37.265 graph: explicitly disabled via build config 00:02:37.265 node: explicitly disabled via build config 00:02:37.265 00:02:37.265 drivers: 00:02:37.265 common/cpt: not in enabled drivers build config 00:02:37.265 common/dpaax: not in enabled drivers build config 00:02:37.265 common/iavf: not in enabled drivers build config 00:02:37.265 common/idpf: not in enabled drivers build config 00:02:37.265 common/mvep: not in enabled drivers build config 00:02:37.265 common/octeontx: not in enabled drivers build config 00:02:37.265 bus/auxiliary: not in enabled drivers build config 00:02:37.265 bus/cdx: not in enabled drivers build config 00:02:37.265 bus/dpaa: not in enabled drivers build config 00:02:37.265 bus/fslmc: not in enabled drivers build config 00:02:37.265 bus/ifpga: not in enabled drivers build config 00:02:37.265 bus/platform: not in enabled drivers build config 00:02:37.265 bus/vmbus: not in 
enabled drivers build config 00:02:37.265 common/cnxk: not in enabled drivers build config 00:02:37.265 common/mlx5: not in enabled drivers build config 00:02:37.265 common/nfp: not in enabled drivers build config 00:02:37.265 common/qat: not in enabled drivers build config 00:02:37.265 common/sfc_efx: not in enabled drivers build config 00:02:37.265 mempool/bucket: not in enabled drivers build config 00:02:37.265 mempool/cnxk: not in enabled drivers build config 00:02:37.265 mempool/dpaa: not in enabled drivers build config 00:02:37.265 mempool/dpaa2: not in enabled drivers build config 00:02:37.265 mempool/octeontx: not in enabled drivers build config 00:02:37.265 mempool/stack: not in enabled drivers build config 00:02:37.265 dma/cnxk: not in enabled drivers build config 00:02:37.265 dma/dpaa: not in enabled drivers build config 00:02:37.265 dma/dpaa2: not in enabled drivers build config 00:02:37.265 dma/hisilicon: not in enabled drivers build config 00:02:37.265 dma/idxd: not in enabled drivers build config 00:02:37.265 dma/ioat: not in enabled drivers build config 00:02:37.265 dma/skeleton: not in enabled drivers build config 00:02:37.265 net/af_packet: not in enabled drivers build config 00:02:37.265 net/af_xdp: not in enabled drivers build config 00:02:37.265 net/ark: not in enabled drivers build config 00:02:37.265 net/atlantic: not in enabled drivers build config 00:02:37.265 net/avp: not in enabled drivers build config 00:02:37.265 net/axgbe: not in enabled drivers build config 00:02:37.265 net/bnx2x: not in enabled drivers build config 00:02:37.265 net/bnxt: not in enabled drivers build config 00:02:37.265 net/bonding: not in enabled drivers build config 00:02:37.265 net/cnxk: not in enabled drivers build config 00:02:37.265 net/cpfl: not in enabled drivers build config 00:02:37.265 net/cxgbe: not in enabled drivers build config 00:02:37.265 net/dpaa: not in enabled drivers build config 00:02:37.265 net/dpaa2: not in enabled drivers build config 
00:02:37.265 net/e1000: not in enabled drivers build config 00:02:37.265 net/ena: not in enabled drivers build config 00:02:37.265 net/enetc: not in enabled drivers build config 00:02:37.265 net/enetfec: not in enabled drivers build config 00:02:37.265 net/enic: not in enabled drivers build config 00:02:37.265 net/failsafe: not in enabled drivers build config 00:02:37.265 net/fm10k: not in enabled drivers build config 00:02:37.265 net/gve: not in enabled drivers build config 00:02:37.265 net/hinic: not in enabled drivers build config 00:02:37.265 net/hns3: not in enabled drivers build config 00:02:37.265 net/i40e: not in enabled drivers build config 00:02:37.265 net/iavf: not in enabled drivers build config 00:02:37.265 net/ice: not in enabled drivers build config 00:02:37.265 net/idpf: not in enabled drivers build config 00:02:37.265 net/igc: not in enabled drivers build config 00:02:37.265 net/ionic: not in enabled drivers build config 00:02:37.266 net/ipn3ke: not in enabled drivers build config 00:02:37.266 net/ixgbe: not in enabled drivers build config 00:02:37.266 net/mana: not in enabled drivers build config 00:02:37.266 net/memif: not in enabled drivers build config 00:02:37.266 net/mlx4: not in enabled drivers build config 00:02:37.266 net/mlx5: not in enabled drivers build config 00:02:37.266 net/mvneta: not in enabled drivers build config 00:02:37.266 net/mvpp2: not in enabled drivers build config 00:02:37.266 net/netvsc: not in enabled drivers build config 00:02:37.266 net/nfb: not in enabled drivers build config 00:02:37.266 net/nfp: not in enabled drivers build config 00:02:37.266 net/ngbe: not in enabled drivers build config 00:02:37.266 net/null: not in enabled drivers build config 00:02:37.266 net/octeontx: not in enabled drivers build config 00:02:37.266 net/octeon_ep: not in enabled drivers build config 00:02:37.266 net/pcap: not in enabled drivers build config 00:02:37.266 net/pfe: not in enabled drivers build config 00:02:37.266 net/qede: not in 
enabled drivers build config 00:02:37.266 net/ring: not in enabled drivers build config 00:02:37.266 net/sfc: not in enabled drivers build config 00:02:37.266 net/softnic: not in enabled drivers build config 00:02:37.266 net/tap: not in enabled drivers build config 00:02:37.266 net/thunderx: not in enabled drivers build config 00:02:37.266 net/txgbe: not in enabled drivers build config 00:02:37.266 net/vdev_netvsc: not in enabled drivers build config 00:02:37.266 net/vhost: not in enabled drivers build config 00:02:37.266 net/virtio: not in enabled drivers build config 00:02:37.266 net/vmxnet3: not in enabled drivers build config 00:02:37.266 raw/*: missing internal dependency, "rawdev" 00:02:37.266 crypto/armv8: not in enabled drivers build config 00:02:37.266 crypto/bcmfs: not in enabled drivers build config 00:02:37.266 crypto/caam_jr: not in enabled drivers build config 00:02:37.266 crypto/ccp: not in enabled drivers build config 00:02:37.266 crypto/cnxk: not in enabled drivers build config 00:02:37.266 crypto/dpaa_sec: not in enabled drivers build config 00:02:37.266 crypto/dpaa2_sec: not in enabled drivers build config 00:02:37.266 crypto/ipsec_mb: not in enabled drivers build config 00:02:37.266 crypto/mlx5: not in enabled drivers build config 00:02:37.266 crypto/mvsam: not in enabled drivers build config 00:02:37.266 crypto/nitrox: not in enabled drivers build config 00:02:37.266 crypto/null: not in enabled drivers build config 00:02:37.266 crypto/octeontx: not in enabled drivers build config 00:02:37.266 crypto/openssl: not in enabled drivers build config 00:02:37.266 crypto/scheduler: not in enabled drivers build config 00:02:37.266 crypto/uadk: not in enabled drivers build config 00:02:37.266 crypto/virtio: not in enabled drivers build config 00:02:37.266 compress/isal: not in enabled drivers build config 00:02:37.266 compress/mlx5: not in enabled drivers build config 00:02:37.266 compress/octeontx: not in enabled drivers build config 00:02:37.266 
compress/zlib: not in enabled drivers build config 00:02:37.266 regex/*: missing internal dependency, "regexdev" 00:02:37.266 ml/*: missing internal dependency, "mldev" 00:02:37.266 vdpa/ifc: not in enabled drivers build config 00:02:37.266 vdpa/mlx5: not in enabled drivers build config 00:02:37.266 vdpa/nfp: not in enabled drivers build config 00:02:37.266 vdpa/sfc: not in enabled drivers build config 00:02:37.266 event/*: missing internal dependency, "eventdev" 00:02:37.266 baseband/*: missing internal dependency, "bbdev" 00:02:37.266 gpu/*: missing internal dependency, "gpudev" 00:02:37.266 00:02:37.266 00:02:37.266 Build targets in project: 85 00:02:37.266 00:02:37.266 DPDK 23.11.0 00:02:37.266 00:02:37.266 User defined options 00:02:37.266 buildtype : debug 00:02:37.266 default_library : shared 00:02:37.266 libdir : lib 00:02:37.266 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:37.266 b_sanitize : address 00:02:37.266 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:37.266 c_link_args : 00:02:37.266 cpu_instruction_set: native 00:02:37.266 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:37.266 disable_libs : acl,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:37.266 enable_docs : false 00:02:37.266 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:37.266 enable_kmods : false 00:02:37.266 max_lcores : 128 00:02:37.266 tests : false 00:02:37.266 00:02:37.266 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:37.834 ninja: Entering directory 
`/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:37.834 [1/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:37.834 [2/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:37.834 [3/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:37.834 [4/265] Linking static target lib/librte_kvargs.a 00:02:37.834 [5/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:37.834 [6/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:37.834 [7/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:37.834 [8/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:37.834 [9/265] Linking static target lib/librte_log.a 00:02:38.093 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:38.352 [11/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:38.352 [12/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.352 [13/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:38.352 [14/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:38.352 [15/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:38.352 [16/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:38.352 [17/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:38.611 [18/265] Linking static target lib/librte_telemetry.a 00:02:38.611 [19/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:38.611 [20/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:38.870 [21/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.870 [22/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 
00:02:38.870 [23/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:38.870 [24/265] Linking target lib/librte_log.so.24.0 00:02:38.870 [25/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:38.870 [26/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:38.870 [27/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:39.129 [28/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:39.129 [29/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:39.129 [30/265] Linking target lib/librte_kvargs.so.24.0 00:02:39.129 [31/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:39.129 [32/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:39.129 [33/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:39.129 [34/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.387 [35/265] Linking target lib/librte_telemetry.so.24.0 00:02:39.387 [36/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:39.387 [37/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:39.387 [38/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:39.387 [39/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:39.387 [40/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:39.387 [41/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:39.387 [42/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:39.387 [43/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:39.646 [44/265] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:39.647 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:39.906 [46/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:39.906 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:39.906 [48/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:39.906 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:39.906 [50/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:39.906 [51/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:39.906 [52/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:40.166 [53/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:40.166 [54/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:40.166 [55/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:40.166 [56/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:40.166 [57/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:40.425 [58/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:40.425 [59/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:40.425 [60/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:40.425 [61/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:40.425 [62/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:40.425 [63/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:40.425 [64/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:40.685 [65/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:40.685 [66/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 
00:02:40.685 [67/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:40.685 [68/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:40.944 [69/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:40.944 [70/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:40.944 [71/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:40.944 [72/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:40.944 [73/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:40.944 [74/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:40.944 [75/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:41.204 [76/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:41.204 [77/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:41.204 [78/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:41.204 [79/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:41.464 [80/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:41.464 [81/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:41.464 [82/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:41.464 [83/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:41.724 [84/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:41.724 [85/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:41.724 [86/265] Linking static target lib/librte_ring.a 00:02:41.724 [87/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:41.724 [88/265] Linking static target lib/librte_eal.a 00:02:41.724 [89/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:41.985 [90/265] Compiling C object 
lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:41.985 [91/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:41.985 [92/265] Linking static target lib/librte_rcu.a 00:02:41.985 [93/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:41.985 [94/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:41.985 [95/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:41.985 [96/265] Linking static target lib/librte_mempool.a 00:02:42.246 [97/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.246 [98/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:42.246 [99/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:42.246 [100/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:42.512 [101/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:42.512 [102/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:42.512 [103/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.512 [104/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:42.512 [105/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:42.512 [106/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:42.512 [107/265] Linking static target lib/librte_net.a 00:02:42.771 [108/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:42.771 [109/265] Linking static target lib/librte_meter.a 00:02:42.771 [110/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:43.031 [111/265] Linking static target lib/librte_mbuf.a 00:02:43.031 [112/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:43.031 [113/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:43.031 [114/265] Generating lib/net.sym_chk with a custom command 
(wrapped by meson to capture output) 00:02:43.031 [115/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.031 [116/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:43.031 [117/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:43.031 [118/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.291 [119/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:43.550 [120/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:43.812 [121/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:43.812 [122/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:43.812 [123/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:43.812 [124/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.812 [125/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:43.812 [126/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:43.812 [127/265] Linking static target lib/librte_pci.a 00:02:43.812 [128/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:43.812 [129/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:44.106 [130/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:44.106 [131/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:44.106 [132/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:44.106 [133/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:44.106 [134/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:44.106 [135/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:44.106 [136/265] Generating 
lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.390 [137/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:44.390 [138/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:44.390 [139/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:44.390 [140/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:44.390 [141/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:44.390 [142/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:44.390 [143/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:44.390 [144/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:44.390 [145/265] Linking static target lib/librte_cmdline.a 00:02:44.651 [146/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:44.651 [147/265] Linking static target lib/librte_timer.a 00:02:44.911 [148/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:44.911 [149/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:44.911 [150/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:44.911 [151/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:45.170 [152/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:45.170 [153/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:45.430 [154/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.430 [155/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:45.430 [156/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:45.430 [157/265] Linking static target lib/librte_hash.a 00:02:45.430 [158/265] Linking static target lib/librte_ethdev.a 00:02:45.430 
[159/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:45.430 [160/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:45.688 [161/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:45.688 [162/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:45.688 [163/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:45.688 [164/265] Linking static target lib/librte_compressdev.a 00:02:45.688 [165/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:45.688 [166/265] Linking static target lib/librte_dmadev.a 00:02:45.946 [167/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:45.946 [168/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.946 [169/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:45.946 [170/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:45.946 [171/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:46.205 [172/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:46.205 [173/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.205 [174/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.465 [175/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:46.465 [176/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:46.465 [177/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:46.465 [178/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:46.465 [179/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:46.465 [180/265] Generating lib/compressdev.sym_chk with a custom command 
(wrapped by meson to capture output) 00:02:46.465 [181/265] Linking static target lib/librte_cryptodev.a 00:02:46.724 [182/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:46.724 [183/265] Linking static target lib/librte_power.a 00:02:46.985 [184/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:46.985 [185/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:46.985 [186/265] Linking static target lib/librte_reorder.a 00:02:46.985 [187/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:46.985 [188/265] Linking static target lib/librte_security.a 00:02:46.985 [189/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:47.244 [190/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:47.503 [191/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.503 [192/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:47.762 [193/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.762 [194/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.762 [195/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:47.762 [196/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:48.021 [197/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:48.021 [198/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:48.281 [199/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:48.281 [200/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:48.281 [201/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:48.281 [202/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:48.541 [203/265] Compiling 
C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:48.541 [204/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:48.541 [205/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:48.541 [206/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.800 [207/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:48.800 [208/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:48.800 [209/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:48.800 [210/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:48.800 [211/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:48.800 [212/265] Linking static target drivers/librte_bus_vdev.a 00:02:48.800 [213/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:48.800 [214/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:49.060 [215/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:49.060 [216/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:49.060 [217/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:49.060 [218/265] Linking static target drivers/librte_bus_pci.a 00:02:49.060 [219/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.060 [220/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:49.060 [221/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:49.060 [222/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:49.060 [223/265] Linking static target drivers/librte_mempool_ring.a 
00:02:49.319 [224/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.701 [225/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:51.639 [226/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.639 [227/265] Linking target lib/librte_eal.so.24.0 00:02:51.899 [228/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:51.899 [229/265] Linking target lib/librte_timer.so.24.0 00:02:51.899 [230/265] Linking target lib/librte_pci.so.24.0 00:02:51.899 [231/265] Linking target lib/librte_ring.so.24.0 00:02:51.899 [232/265] Linking target lib/librte_meter.so.24.0 00:02:51.899 [233/265] Linking target lib/librte_dmadev.so.24.0 00:02:51.899 [234/265] Linking target drivers/librte_bus_vdev.so.24.0 00:02:51.899 [235/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:51.899 [236/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:51.899 [237/265] Linking target drivers/librte_bus_pci.so.24.0 00:02:51.899 [238/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:51.899 [239/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:51.899 [240/265] Linking target lib/librte_rcu.so.24.0 00:02:52.159 [241/265] Linking target lib/librte_mempool.so.24.0 00:02:52.159 [242/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:52.159 [243/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:52.159 [244/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:52.159 [245/265] Linking target lib/librte_mbuf.so.24.0 00:02:52.159 [246/265] Linking target drivers/librte_mempool_ring.so.24.0 00:02:52.417 [247/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 
00:02:52.417 [248/265] Linking target lib/librte_compressdev.so.24.0 00:02:52.417 [249/265] Linking target lib/librte_net.so.24.0 00:02:52.417 [250/265] Linking target lib/librte_reorder.so.24.0 00:02:52.417 [251/265] Linking target lib/librte_cryptodev.so.24.0 00:02:52.417 [252/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:52.417 [253/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:52.417 [254/265] Linking target lib/librte_cmdline.so.24.0 00:02:52.676 [255/265] Linking target lib/librte_hash.so.24.0 00:02:52.676 [256/265] Linking target lib/librte_security.so.24.0 00:02:52.676 [257/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:52.934 [258/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.934 [259/265] Linking target lib/librte_ethdev.so.24.0 00:02:53.193 [260/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:53.193 [261/265] Linking target lib/librte_power.so.24.0 00:02:53.762 [262/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:54.022 [263/265] Linking static target lib/librte_vhost.a 00:02:56.557 [264/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.557 [265/265] Linking target lib/librte_vhost.so.24.0 00:02:56.557 INFO: autodetecting backend as ninja 00:02:56.557 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:18.511 CC lib/log/log.o 00:03:18.511 CC lib/log/log_deprecated.o 00:03:18.511 CC lib/log/log_flags.o 00:03:18.511 CC lib/ut/ut.o 00:03:18.511 CC lib/ut_mock/mock.o 00:03:18.511 LIB libspdk_ut_mock.a 00:03:18.511 LIB libspdk_ut.a 00:03:18.511 LIB libspdk_log.a 00:03:18.511 SO libspdk_ut_mock.so.6.0 00:03:18.511 SO libspdk_ut.so.2.0 00:03:18.511 SO libspdk_log.so.7.1 00:03:18.511 
SYMLINK libspdk_ut_mock.so 00:03:18.511 SYMLINK libspdk_ut.so 00:03:18.511 SYMLINK libspdk_log.so 00:03:18.511 CC lib/dma/dma.o 00:03:18.511 CC lib/ioat/ioat.o 00:03:18.511 CXX lib/trace_parser/trace.o 00:03:18.511 CC lib/util/cpuset.o 00:03:18.511 CC lib/util/base64.o 00:03:18.511 CC lib/util/crc16.o 00:03:18.511 CC lib/util/crc32.o 00:03:18.511 CC lib/util/bit_array.o 00:03:18.511 CC lib/util/crc32c.o 00:03:18.511 CC lib/vfio_user/host/vfio_user_pci.o 00:03:18.511 CC lib/util/crc32_ieee.o 00:03:18.511 CC lib/vfio_user/host/vfio_user.o 00:03:18.511 CC lib/util/crc64.o 00:03:18.511 LIB libspdk_dma.a 00:03:18.511 CC lib/util/dif.o 00:03:18.511 CC lib/util/fd.o 00:03:18.511 CC lib/util/fd_group.o 00:03:18.511 SO libspdk_dma.so.5.0 00:03:18.511 CC lib/util/file.o 00:03:18.511 CC lib/util/hexlify.o 00:03:18.511 SYMLINK libspdk_dma.so 00:03:18.511 CC lib/util/iov.o 00:03:18.511 LIB libspdk_ioat.a 00:03:18.511 SO libspdk_ioat.so.7.0 00:03:18.511 CC lib/util/math.o 00:03:18.511 CC lib/util/net.o 00:03:18.511 SYMLINK libspdk_ioat.so 00:03:18.511 LIB libspdk_vfio_user.a 00:03:18.511 CC lib/util/pipe.o 00:03:18.511 CC lib/util/strerror_tls.o 00:03:18.511 SO libspdk_vfio_user.so.5.0 00:03:18.511 CC lib/util/string.o 00:03:18.511 CC lib/util/uuid.o 00:03:18.511 SYMLINK libspdk_vfio_user.so 00:03:18.511 CC lib/util/xor.o 00:03:18.511 CC lib/util/zipf.o 00:03:18.511 CC lib/util/md5.o 00:03:18.511 LIB libspdk_util.a 00:03:18.511 SO libspdk_util.so.10.0 00:03:18.511 LIB libspdk_trace_parser.a 00:03:18.511 SO libspdk_trace_parser.so.6.0 00:03:18.511 SYMLINK libspdk_util.so 00:03:18.511 SYMLINK libspdk_trace_parser.so 00:03:18.511 CC lib/vmd/vmd.o 00:03:18.511 CC lib/vmd/led.o 00:03:18.511 CC lib/idxd/idxd.o 00:03:18.511 CC lib/env_dpdk/env.o 00:03:18.511 CC lib/env_dpdk/memory.o 00:03:18.511 CC lib/idxd/idxd_user.o 00:03:18.511 CC lib/conf/conf.o 00:03:18.511 CC lib/json/json_parse.o 00:03:18.511 CC lib/rdma_provider/common.o 00:03:18.512 CC lib/rdma_utils/rdma_utils.o 00:03:18.512 
CC lib/json/json_util.o 00:03:18.512 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:18.512 CC lib/json/json_write.o 00:03:18.512 LIB libspdk_conf.a 00:03:18.512 CC lib/idxd/idxd_kernel.o 00:03:18.512 SO libspdk_conf.so.6.0 00:03:18.512 LIB libspdk_rdma_utils.a 00:03:18.512 SO libspdk_rdma_utils.so.1.0 00:03:18.512 SYMLINK libspdk_conf.so 00:03:18.512 CC lib/env_dpdk/pci.o 00:03:18.512 CC lib/env_dpdk/init.o 00:03:18.512 SYMLINK libspdk_rdma_utils.so 00:03:18.512 CC lib/env_dpdk/threads.o 00:03:18.512 LIB libspdk_rdma_provider.a 00:03:18.768 SO libspdk_rdma_provider.so.6.0 00:03:18.768 CC lib/env_dpdk/pci_ioat.o 00:03:18.768 SYMLINK libspdk_rdma_provider.so 00:03:18.768 CC lib/env_dpdk/pci_virtio.o 00:03:18.768 CC lib/env_dpdk/pci_vmd.o 00:03:18.768 LIB libspdk_json.a 00:03:18.768 SO libspdk_json.so.6.0 00:03:18.768 CC lib/env_dpdk/pci_idxd.o 00:03:18.768 CC lib/env_dpdk/pci_event.o 00:03:18.768 SYMLINK libspdk_json.so 00:03:18.768 CC lib/env_dpdk/sigbus_handler.o 00:03:19.025 CC lib/env_dpdk/pci_dpdk.o 00:03:19.025 LIB libspdk_idxd.a 00:03:19.025 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:19.025 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:19.025 SO libspdk_idxd.so.12.1 00:03:19.025 LIB libspdk_vmd.a 00:03:19.025 SO libspdk_vmd.so.6.0 00:03:19.025 SYMLINK libspdk_idxd.so 00:03:19.025 SYMLINK libspdk_vmd.so 00:03:19.283 CC lib/jsonrpc/jsonrpc_server.o 00:03:19.283 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:19.283 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:19.283 CC lib/jsonrpc/jsonrpc_client.o 00:03:19.541 LIB libspdk_jsonrpc.a 00:03:19.541 SO libspdk_jsonrpc.so.6.0 00:03:19.800 SYMLINK libspdk_jsonrpc.so 00:03:20.059 LIB libspdk_env_dpdk.a 00:03:20.059 CC lib/rpc/rpc.o 00:03:20.059 SO libspdk_env_dpdk.so.15.0 00:03:20.319 LIB libspdk_rpc.a 00:03:20.319 SYMLINK libspdk_env_dpdk.so 00:03:20.319 SO libspdk_rpc.so.6.0 00:03:20.319 SYMLINK libspdk_rpc.so 00:03:20.887 CC lib/trace/trace.o 00:03:20.887 CC lib/trace/trace_rpc.o 00:03:20.887 CC lib/trace/trace_flags.o 00:03:20.887 CC 
lib/notify/notify.o 00:03:20.887 CC lib/notify/notify_rpc.o 00:03:20.887 CC lib/keyring/keyring.o 00:03:20.887 CC lib/keyring/keyring_rpc.o 00:03:20.887 LIB libspdk_notify.a 00:03:20.887 SO libspdk_notify.so.6.0 00:03:21.145 LIB libspdk_keyring.a 00:03:21.145 SYMLINK libspdk_notify.so 00:03:21.146 LIB libspdk_trace.a 00:03:21.146 SO libspdk_keyring.so.2.0 00:03:21.146 SO libspdk_trace.so.11.0 00:03:21.146 SYMLINK libspdk_keyring.so 00:03:21.146 SYMLINK libspdk_trace.so 00:03:21.713 CC lib/thread/iobuf.o 00:03:21.713 CC lib/thread/thread.o 00:03:21.713 CC lib/sock/sock.o 00:03:21.713 CC lib/sock/sock_rpc.o 00:03:21.972 LIB libspdk_sock.a 00:03:22.231 SO libspdk_sock.so.10.0 00:03:22.232 SYMLINK libspdk_sock.so 00:03:22.491 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:22.491 CC lib/nvme/nvme_ctrlr.o 00:03:22.491 CC lib/nvme/nvme_fabric.o 00:03:22.491 CC lib/nvme/nvme_ns_cmd.o 00:03:22.491 CC lib/nvme/nvme_pcie.o 00:03:22.491 CC lib/nvme/nvme_ns.o 00:03:22.491 CC lib/nvme/nvme_pcie_common.o 00:03:22.491 CC lib/nvme/nvme.o 00:03:22.491 CC lib/nvme/nvme_qpair.o 00:03:23.429 CC lib/nvme/nvme_quirks.o 00:03:23.429 CC lib/nvme/nvme_transport.o 00:03:23.429 LIB libspdk_thread.a 00:03:23.429 CC lib/nvme/nvme_discovery.o 00:03:23.429 SO libspdk_thread.so.10.2 00:03:23.429 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:23.429 SYMLINK libspdk_thread.so 00:03:23.429 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:23.429 CC lib/nvme/nvme_tcp.o 00:03:23.429 CC lib/nvme/nvme_opal.o 00:03:23.688 CC lib/nvme/nvme_io_msg.o 00:03:23.688 CC lib/nvme/nvme_poll_group.o 00:03:23.948 CC lib/nvme/nvme_zns.o 00:03:23.948 CC lib/nvme/nvme_stubs.o 00:03:24.207 CC lib/nvme/nvme_auth.o 00:03:24.207 CC lib/nvme/nvme_cuse.o 00:03:24.207 CC lib/nvme/nvme_rdma.o 00:03:24.207 CC lib/accel/accel.o 00:03:24.466 CC lib/blob/blobstore.o 00:03:24.466 CC lib/blob/request.o 00:03:24.726 CC lib/init/json_config.o 00:03:24.726 CC lib/virtio/virtio.o 00:03:24.726 CC lib/blob/zeroes.o 00:03:24.984 CC lib/init/subsystem.o 00:03:24.984 CC 
lib/virtio/virtio_vhost_user.o 00:03:24.984 CC lib/blob/blob_bs_dev.o 00:03:24.984 CC lib/init/subsystem_rpc.o 00:03:24.984 CC lib/virtio/virtio_vfio_user.o 00:03:25.242 CC lib/fsdev/fsdev.o 00:03:25.242 CC lib/accel/accel_rpc.o 00:03:25.242 CC lib/init/rpc.o 00:03:25.242 CC lib/virtio/virtio_pci.o 00:03:25.242 CC lib/accel/accel_sw.o 00:03:25.242 CC lib/fsdev/fsdev_io.o 00:03:25.242 CC lib/fsdev/fsdev_rpc.o 00:03:25.501 LIB libspdk_init.a 00:03:25.501 SO libspdk_init.so.6.0 00:03:25.501 SYMLINK libspdk_init.so 00:03:25.501 LIB libspdk_virtio.a 00:03:25.501 LIB libspdk_accel.a 00:03:25.501 SO libspdk_virtio.so.7.0 00:03:25.764 SO libspdk_accel.so.16.0 00:03:25.764 LIB libspdk_nvme.a 00:03:25.764 SYMLINK libspdk_virtio.so 00:03:25.764 SYMLINK libspdk_accel.so 00:03:25.764 CC lib/event/app.o 00:03:25.764 CC lib/event/reactor.o 00:03:25.764 CC lib/event/app_rpc.o 00:03:25.764 CC lib/event/log_rpc.o 00:03:25.764 LIB libspdk_fsdev.a 00:03:25.764 CC lib/event/scheduler_static.o 00:03:25.764 SO libspdk_nvme.so.14.0 00:03:26.070 SO libspdk_fsdev.so.1.0 00:03:26.070 SYMLINK libspdk_fsdev.so 00:03:26.070 CC lib/bdev/bdev_rpc.o 00:03:26.070 CC lib/bdev/bdev.o 00:03:26.071 CC lib/bdev/bdev_zone.o 00:03:26.071 CC lib/bdev/part.o 00:03:26.071 SYMLINK libspdk_nvme.so 00:03:26.071 CC lib/bdev/scsi_nvme.o 00:03:26.071 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:26.334 LIB libspdk_event.a 00:03:26.592 SO libspdk_event.so.14.0 00:03:26.592 SYMLINK libspdk_event.so 00:03:26.851 LIB libspdk_fuse_dispatcher.a 00:03:26.851 SO libspdk_fuse_dispatcher.so.1.0 00:03:27.110 SYMLINK libspdk_fuse_dispatcher.so 00:03:28.488 LIB libspdk_blob.a 00:03:28.488 SO libspdk_blob.so.11.0 00:03:28.488 SYMLINK libspdk_blob.so 00:03:28.747 CC lib/lvol/lvol.o 00:03:28.747 CC lib/blobfs/tree.o 00:03:28.747 CC lib/blobfs/blobfs.o 00:03:29.683 LIB libspdk_bdev.a 00:03:29.683 SO libspdk_bdev.so.17.0 00:03:29.683 SYMLINK libspdk_bdev.so 00:03:29.942 LIB libspdk_blobfs.a 00:03:29.942 SO libspdk_blobfs.so.10.0 
00:03:29.942 CC lib/nvmf/ctrlr.o 00:03:29.942 CC lib/nvmf/ctrlr_discovery.o 00:03:29.942 CC lib/nvmf/subsystem.o 00:03:29.942 CC lib/nvmf/ctrlr_bdev.o 00:03:29.942 CC lib/ftl/ftl_core.o 00:03:29.942 CC lib/ublk/ublk.o 00:03:29.942 CC lib/nbd/nbd.o 00:03:29.942 SYMLINK libspdk_blobfs.so 00:03:29.942 CC lib/scsi/dev.o 00:03:29.942 CC lib/scsi/lun.o 00:03:29.942 LIB libspdk_lvol.a 00:03:29.942 SO libspdk_lvol.so.10.0 00:03:30.200 SYMLINK libspdk_lvol.so 00:03:30.200 CC lib/nbd/nbd_rpc.o 00:03:30.200 CC lib/scsi/port.o 00:03:30.200 CC lib/scsi/scsi.o 00:03:30.459 CC lib/nvmf/nvmf.o 00:03:30.459 CC lib/nvmf/nvmf_rpc.o 00:03:30.459 CC lib/scsi/scsi_bdev.o 00:03:30.459 CC lib/ftl/ftl_init.o 00:03:30.459 LIB libspdk_nbd.a 00:03:30.459 SO libspdk_nbd.so.7.0 00:03:30.459 CC lib/nvmf/transport.o 00:03:30.459 SYMLINK libspdk_nbd.so 00:03:30.459 CC lib/nvmf/tcp.o 00:03:30.718 CC lib/ftl/ftl_layout.o 00:03:30.718 CC lib/ublk/ublk_rpc.o 00:03:30.718 CC lib/nvmf/stubs.o 00:03:30.977 LIB libspdk_ublk.a 00:03:30.977 SO libspdk_ublk.so.3.0 00:03:30.977 SYMLINK libspdk_ublk.so 00:03:30.977 CC lib/nvmf/mdns_server.o 00:03:30.977 CC lib/scsi/scsi_pr.o 00:03:30.977 CC lib/ftl/ftl_debug.o 00:03:31.236 CC lib/nvmf/rdma.o 00:03:31.236 CC lib/scsi/scsi_rpc.o 00:03:31.236 CC lib/ftl/ftl_io.o 00:03:31.494 CC lib/scsi/task.o 00:03:31.494 CC lib/nvmf/auth.o 00:03:31.494 CC lib/ftl/ftl_sb.o 00:03:31.494 CC lib/ftl/ftl_l2p.o 00:03:31.494 CC lib/ftl/ftl_l2p_flat.o 00:03:31.494 CC lib/ftl/ftl_nv_cache.o 00:03:31.494 CC lib/ftl/ftl_band.o 00:03:31.494 LIB libspdk_scsi.a 00:03:31.752 CC lib/ftl/ftl_band_ops.o 00:03:31.752 SO libspdk_scsi.so.9.0 00:03:31.752 CC lib/ftl/ftl_writer.o 00:03:31.752 SYMLINK libspdk_scsi.so 00:03:31.752 CC lib/ftl/ftl_rq.o 00:03:31.752 CC lib/ftl/ftl_reloc.o 00:03:32.010 CC lib/ftl/ftl_l2p_cache.o 00:03:32.010 CC lib/ftl/ftl_p2l.o 00:03:32.010 CC lib/ftl/ftl_p2l_log.o 00:03:32.269 CC lib/ftl/mngt/ftl_mngt.o 00:03:32.269 CC lib/iscsi/conn.o 00:03:32.269 CC lib/vhost/vhost.o 
00:03:32.528 CC lib/iscsi/init_grp.o 00:03:32.528 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:32.528 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:32.528 CC lib/iscsi/iscsi.o 00:03:32.528 CC lib/vhost/vhost_rpc.o 00:03:32.787 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:32.787 CC lib/iscsi/param.o 00:03:32.787 CC lib/iscsi/portal_grp.o 00:03:32.787 CC lib/iscsi/tgt_node.o 00:03:32.787 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:32.787 CC lib/iscsi/iscsi_subsystem.o 00:03:33.046 CC lib/iscsi/iscsi_rpc.o 00:03:33.046 CC lib/iscsi/task.o 00:03:33.046 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:33.046 CC lib/vhost/vhost_scsi.o 00:03:33.046 CC lib/vhost/vhost_blk.o 00:03:33.353 CC lib/vhost/rte_vhost_user.o 00:03:33.353 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:33.353 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:33.353 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:33.353 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:33.353 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:33.353 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:33.353 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:33.614 CC lib/ftl/utils/ftl_conf.o 00:03:33.614 CC lib/ftl/utils/ftl_md.o 00:03:33.614 CC lib/ftl/utils/ftl_mempool.o 00:03:33.614 CC lib/ftl/utils/ftl_bitmap.o 00:03:33.614 CC lib/ftl/utils/ftl_property.o 00:03:33.873 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:33.873 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:33.873 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:33.873 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:33.873 LIB libspdk_nvmf.a 00:03:34.132 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:34.132 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:34.132 SO libspdk_nvmf.so.19.1 00:03:34.132 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:34.132 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:34.132 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:34.132 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:34.132 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:34.132 LIB libspdk_iscsi.a 00:03:34.132 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:34.391 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:34.391 CC 
lib/ftl/base/ftl_base_dev.o 00:03:34.391 LIB libspdk_vhost.a 00:03:34.391 SO libspdk_iscsi.so.8.0 00:03:34.391 CC lib/ftl/base/ftl_base_bdev.o 00:03:34.391 SYMLINK libspdk_nvmf.so 00:03:34.391 CC lib/ftl/ftl_trace.o 00:03:34.391 SO libspdk_vhost.so.8.0 00:03:34.391 SYMLINK libspdk_vhost.so 00:03:34.391 SYMLINK libspdk_iscsi.so 00:03:34.650 LIB libspdk_ftl.a 00:03:34.909 SO libspdk_ftl.so.9.0 00:03:35.168 SYMLINK libspdk_ftl.so 00:03:35.427 CC module/env_dpdk/env_dpdk_rpc.o 00:03:35.427 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:35.427 CC module/accel/dsa/accel_dsa.o 00:03:35.427 CC module/sock/posix/posix.o 00:03:35.427 CC module/accel/ioat/accel_ioat.o 00:03:35.427 CC module/accel/error/accel_error.o 00:03:35.427 CC module/blob/bdev/blob_bdev.o 00:03:35.427 CC module/keyring/file/keyring.o 00:03:35.686 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:35.686 CC module/fsdev/aio/fsdev_aio.o 00:03:35.686 LIB libspdk_env_dpdk_rpc.a 00:03:35.686 SO libspdk_env_dpdk_rpc.so.6.0 00:03:35.686 SYMLINK libspdk_env_dpdk_rpc.so 00:03:35.686 CC module/accel/error/accel_error_rpc.o 00:03:35.686 LIB libspdk_scheduler_dpdk_governor.a 00:03:35.686 CC module/keyring/file/keyring_rpc.o 00:03:35.686 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:35.686 CC module/accel/ioat/accel_ioat_rpc.o 00:03:35.686 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:35.686 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:35.686 LIB libspdk_scheduler_dynamic.a 00:03:35.686 LIB libspdk_accel_error.a 00:03:35.686 SO libspdk_scheduler_dynamic.so.4.0 00:03:35.946 SO libspdk_accel_error.so.2.0 00:03:35.946 LIB libspdk_blob_bdev.a 00:03:35.946 CC module/accel/dsa/accel_dsa_rpc.o 00:03:35.946 LIB libspdk_accel_ioat.a 00:03:35.946 SYMLINK libspdk_scheduler_dynamic.so 00:03:35.946 LIB libspdk_keyring_file.a 00:03:35.946 SO libspdk_blob_bdev.so.11.0 00:03:35.946 SO libspdk_accel_ioat.so.6.0 00:03:35.946 SYMLINK libspdk_accel_error.so 00:03:35.946 SO libspdk_keyring_file.so.2.0 00:03:35.946 CC 
module/fsdev/aio/linux_aio_mgr.o 00:03:35.946 CC module/scheduler/gscheduler/gscheduler.o 00:03:35.946 SYMLINK libspdk_blob_bdev.so 00:03:35.946 SYMLINK libspdk_accel_ioat.so 00:03:35.946 SYMLINK libspdk_keyring_file.so 00:03:35.946 LIB libspdk_accel_dsa.a 00:03:35.946 SO libspdk_accel_dsa.so.5.0 00:03:35.946 CC module/accel/iaa/accel_iaa.o 00:03:35.946 SYMLINK libspdk_accel_dsa.so 00:03:35.946 LIB libspdk_scheduler_gscheduler.a 00:03:35.946 CC module/keyring/linux/keyring.o 00:03:36.204 SO libspdk_scheduler_gscheduler.so.4.0 00:03:36.204 CC module/keyring/linux/keyring_rpc.o 00:03:36.204 SYMLINK libspdk_scheduler_gscheduler.so 00:03:36.204 CC module/accel/iaa/accel_iaa_rpc.o 00:03:36.204 CC module/bdev/error/vbdev_error.o 00:03:36.204 CC module/bdev/delay/vbdev_delay.o 00:03:36.204 CC module/blobfs/bdev/blobfs_bdev.o 00:03:36.204 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:36.204 CC module/bdev/gpt/gpt.o 00:03:36.204 CC module/bdev/gpt/vbdev_gpt.o 00:03:36.204 LIB libspdk_keyring_linux.a 00:03:36.204 LIB libspdk_fsdev_aio.a 00:03:36.204 LIB libspdk_accel_iaa.a 00:03:36.204 SO libspdk_keyring_linux.so.1.0 00:03:36.204 SO libspdk_accel_iaa.so.3.0 00:03:36.204 SO libspdk_fsdev_aio.so.1.0 00:03:36.463 SYMLINK libspdk_keyring_linux.so 00:03:36.463 CC module/bdev/error/vbdev_error_rpc.o 00:03:36.463 LIB libspdk_sock_posix.a 00:03:36.463 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:36.463 LIB libspdk_blobfs_bdev.a 00:03:36.463 SYMLINK libspdk_accel_iaa.so 00:03:36.463 SO libspdk_sock_posix.so.6.0 00:03:36.463 SO libspdk_blobfs_bdev.so.6.0 00:03:36.463 SYMLINK libspdk_fsdev_aio.so 00:03:36.463 SYMLINK libspdk_blobfs_bdev.so 00:03:36.463 SYMLINK libspdk_sock_posix.so 00:03:36.463 LIB libspdk_bdev_error.a 00:03:36.463 LIB libspdk_bdev_gpt.a 00:03:36.463 SO libspdk_bdev_error.so.6.0 00:03:36.463 SO libspdk_bdev_gpt.so.6.0 00:03:36.463 CC module/bdev/lvol/vbdev_lvol.o 00:03:36.463 CC module/bdev/malloc/bdev_malloc.o 00:03:36.463 LIB libspdk_bdev_delay.a 00:03:36.463 CC 
module/bdev/null/bdev_null.o 00:03:36.722 SO libspdk_bdev_delay.so.6.0 00:03:36.722 CC module/bdev/nvme/bdev_nvme.o 00:03:36.722 SYMLINK libspdk_bdev_gpt.so 00:03:36.722 SYMLINK libspdk_bdev_error.so 00:03:36.722 CC module/bdev/raid/bdev_raid.o 00:03:36.722 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:36.722 CC module/bdev/null/bdev_null_rpc.o 00:03:36.722 CC module/bdev/passthru/vbdev_passthru.o 00:03:36.722 SYMLINK libspdk_bdev_delay.so 00:03:36.722 CC module/bdev/split/vbdev_split.o 00:03:36.722 CC module/bdev/nvme/nvme_rpc.o 00:03:36.722 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:36.722 LIB libspdk_bdev_null.a 00:03:36.981 SO libspdk_bdev_null.so.6.0 00:03:36.981 CC module/bdev/split/vbdev_split_rpc.o 00:03:36.981 CC module/bdev/nvme/bdev_mdns_client.o 00:03:36.981 SYMLINK libspdk_bdev_null.so 00:03:36.981 CC module/bdev/nvme/vbdev_opal.o 00:03:36.981 LIB libspdk_bdev_malloc.a 00:03:36.981 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:36.981 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:36.981 SO libspdk_bdev_malloc.so.6.0 00:03:36.981 SYMLINK libspdk_bdev_malloc.so 00:03:36.981 CC module/bdev/raid/bdev_raid_rpc.o 00:03:36.981 LIB libspdk_bdev_split.a 00:03:36.981 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:36.981 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:36.981 SO libspdk_bdev_split.so.6.0 00:03:36.981 LIB libspdk_bdev_passthru.a 00:03:37.239 SO libspdk_bdev_passthru.so.6.0 00:03:37.239 CC module/bdev/raid/bdev_raid_sb.o 00:03:37.239 SYMLINK libspdk_bdev_split.so 00:03:37.239 CC module/bdev/raid/raid0.o 00:03:37.239 SYMLINK libspdk_bdev_passthru.so 00:03:37.239 CC module/bdev/raid/raid1.o 00:03:37.239 CC module/bdev/raid/concat.o 00:03:37.239 CC module/bdev/raid/raid5f.o 00:03:37.239 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:37.498 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:37.498 LIB libspdk_bdev_lvol.a 00:03:37.498 SO libspdk_bdev_lvol.so.6.0 00:03:37.498 CC module/bdev/aio/bdev_aio.o 00:03:37.498 SYMLINK libspdk_bdev_lvol.so 
00:03:37.498 CC module/bdev/aio/bdev_aio_rpc.o 00:03:37.498 CC module/bdev/ftl/bdev_ftl.o 00:03:37.498 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:37.498 CC module/bdev/iscsi/bdev_iscsi.o 00:03:37.756 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:37.756 LIB libspdk_bdev_zone_block.a 00:03:37.756 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:37.756 SO libspdk_bdev_zone_block.so.6.0 00:03:37.756 SYMLINK libspdk_bdev_zone_block.so 00:03:37.756 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:37.756 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:37.756 LIB libspdk_bdev_raid.a 00:03:37.756 SO libspdk_bdev_raid.so.6.0 00:03:37.756 LIB libspdk_bdev_aio.a 00:03:37.756 LIB libspdk_bdev_ftl.a 00:03:37.756 SO libspdk_bdev_aio.so.6.0 00:03:38.016 SO libspdk_bdev_ftl.so.6.0 00:03:38.016 SYMLINK libspdk_bdev_raid.so 00:03:38.016 SYMLINK libspdk_bdev_aio.so 00:03:38.016 LIB libspdk_bdev_iscsi.a 00:03:38.016 SYMLINK libspdk_bdev_ftl.so 00:03:38.016 SO libspdk_bdev_iscsi.so.6.0 00:03:38.016 SYMLINK libspdk_bdev_iscsi.so 00:03:38.275 LIB libspdk_bdev_virtio.a 00:03:38.275 SO libspdk_bdev_virtio.so.6.0 00:03:38.275 SYMLINK libspdk_bdev_virtio.so 00:03:39.213 LIB libspdk_bdev_nvme.a 00:03:39.213 SO libspdk_bdev_nvme.so.7.0 00:03:39.213 SYMLINK libspdk_bdev_nvme.so 00:03:39.782 CC module/event/subsystems/fsdev/fsdev.o 00:03:39.782 CC module/event/subsystems/sock/sock.o 00:03:39.782 CC module/event/subsystems/keyring/keyring.o 00:03:39.782 CC module/event/subsystems/iobuf/iobuf.o 00:03:39.782 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:39.782 CC module/event/subsystems/scheduler/scheduler.o 00:03:39.782 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:39.782 CC module/event/subsystems/vmd/vmd.o 00:03:39.782 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:40.042 LIB libspdk_event_fsdev.a 00:03:40.042 LIB libspdk_event_sock.a 00:03:40.042 LIB libspdk_event_vhost_blk.a 00:03:40.042 LIB libspdk_event_scheduler.a 00:03:40.042 LIB libspdk_event_vmd.a 00:03:40.042 LIB 
libspdk_event_keyring.a 00:03:40.042 LIB libspdk_event_iobuf.a 00:03:40.042 SO libspdk_event_fsdev.so.1.0 00:03:40.042 SO libspdk_event_sock.so.5.0 00:03:40.042 SO libspdk_event_vhost_blk.so.3.0 00:03:40.042 SO libspdk_event_scheduler.so.4.0 00:03:40.042 SO libspdk_event_keyring.so.1.0 00:03:40.042 SO libspdk_event_vmd.so.6.0 00:03:40.042 SO libspdk_event_iobuf.so.3.0 00:03:40.042 SYMLINK libspdk_event_fsdev.so 00:03:40.042 SYMLINK libspdk_event_sock.so 00:03:40.042 SYMLINK libspdk_event_vhost_blk.so 00:03:40.042 SYMLINK libspdk_event_scheduler.so 00:03:40.042 SYMLINK libspdk_event_keyring.so 00:03:40.042 SYMLINK libspdk_event_iobuf.so 00:03:40.042 SYMLINK libspdk_event_vmd.so 00:03:40.622 CC module/event/subsystems/accel/accel.o 00:03:40.622 LIB libspdk_event_accel.a 00:03:40.622 SO libspdk_event_accel.so.6.0 00:03:40.622 SYMLINK libspdk_event_accel.so 00:03:41.191 CC module/event/subsystems/bdev/bdev.o 00:03:41.191 LIB libspdk_event_bdev.a 00:03:41.191 SO libspdk_event_bdev.so.6.0 00:03:41.451 SYMLINK libspdk_event_bdev.so 00:03:41.711 CC module/event/subsystems/scsi/scsi.o 00:03:41.711 CC module/event/subsystems/ublk/ublk.o 00:03:41.711 CC module/event/subsystems/nbd/nbd.o 00:03:41.711 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:41.711 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:41.970 LIB libspdk_event_ublk.a 00:03:41.970 LIB libspdk_event_nbd.a 00:03:41.970 LIB libspdk_event_scsi.a 00:03:41.970 SO libspdk_event_nbd.so.6.0 00:03:41.970 SO libspdk_event_ublk.so.3.0 00:03:41.970 SO libspdk_event_scsi.so.6.0 00:03:41.970 SYMLINK libspdk_event_ublk.so 00:03:41.970 SYMLINK libspdk_event_nbd.so 00:03:41.970 LIB libspdk_event_nvmf.a 00:03:41.970 SYMLINK libspdk_event_scsi.so 00:03:41.970 SO libspdk_event_nvmf.so.6.0 00:03:41.970 SYMLINK libspdk_event_nvmf.so 00:03:42.232 CC module/event/subsystems/iscsi/iscsi.o 00:03:42.232 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:42.494 LIB libspdk_event_iscsi.a 00:03:42.494 LIB libspdk_event_vhost_scsi.a 
00:03:42.494 SO libspdk_event_vhost_scsi.so.3.0 00:03:42.494 SO libspdk_event_iscsi.so.6.0 00:03:42.494 SYMLINK libspdk_event_vhost_scsi.so 00:03:42.494 SYMLINK libspdk_event_iscsi.so 00:03:42.754 SO libspdk.so.6.0 00:03:42.754 SYMLINK libspdk.so 00:03:43.020 CC app/trace_record/trace_record.o 00:03:43.020 TEST_HEADER include/spdk/accel.h 00:03:43.020 TEST_HEADER include/spdk/accel_module.h 00:03:43.020 TEST_HEADER include/spdk/assert.h 00:03:43.020 CXX app/trace/trace.o 00:03:43.020 TEST_HEADER include/spdk/barrier.h 00:03:43.020 TEST_HEADER include/spdk/base64.h 00:03:43.020 TEST_HEADER include/spdk/bdev.h 00:03:43.020 TEST_HEADER include/spdk/bdev_module.h 00:03:43.020 TEST_HEADER include/spdk/bdev_zone.h 00:03:43.020 TEST_HEADER include/spdk/bit_array.h 00:03:43.020 TEST_HEADER include/spdk/bit_pool.h 00:03:43.020 TEST_HEADER include/spdk/blob_bdev.h 00:03:43.020 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:43.020 TEST_HEADER include/spdk/blobfs.h 00:03:43.020 TEST_HEADER include/spdk/blob.h 00:03:43.020 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:43.020 TEST_HEADER include/spdk/conf.h 00:03:43.020 TEST_HEADER include/spdk/config.h 00:03:43.020 CC app/nvmf_tgt/nvmf_main.o 00:03:43.020 TEST_HEADER include/spdk/cpuset.h 00:03:43.020 TEST_HEADER include/spdk/crc16.h 00:03:43.020 TEST_HEADER include/spdk/crc32.h 00:03:43.020 TEST_HEADER include/spdk/crc64.h 00:03:43.020 TEST_HEADER include/spdk/dif.h 00:03:43.020 TEST_HEADER include/spdk/dma.h 00:03:43.020 TEST_HEADER include/spdk/endian.h 00:03:43.020 TEST_HEADER include/spdk/env_dpdk.h 00:03:43.020 TEST_HEADER include/spdk/env.h 00:03:43.020 TEST_HEADER include/spdk/event.h 00:03:43.020 TEST_HEADER include/spdk/fd_group.h 00:03:43.020 TEST_HEADER include/spdk/fd.h 00:03:43.020 TEST_HEADER include/spdk/file.h 00:03:43.020 TEST_HEADER include/spdk/fsdev.h 00:03:43.020 TEST_HEADER include/spdk/fsdev_module.h 00:03:43.020 CC examples/ioat/perf/perf.o 00:03:43.020 TEST_HEADER include/spdk/ftl.h 00:03:43.020 
TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:43.020 CC test/thread/poller_perf/poller_perf.o 00:03:43.020 TEST_HEADER include/spdk/gpt_spec.h 00:03:43.020 TEST_HEADER include/spdk/hexlify.h 00:03:43.020 TEST_HEADER include/spdk/histogram_data.h 00:03:43.290 TEST_HEADER include/spdk/idxd.h 00:03:43.290 TEST_HEADER include/spdk/idxd_spec.h 00:03:43.290 CC examples/util/zipf/zipf.o 00:03:43.290 TEST_HEADER include/spdk/init.h 00:03:43.290 TEST_HEADER include/spdk/ioat.h 00:03:43.290 TEST_HEADER include/spdk/ioat_spec.h 00:03:43.290 TEST_HEADER include/spdk/iscsi_spec.h 00:03:43.290 TEST_HEADER include/spdk/json.h 00:03:43.290 TEST_HEADER include/spdk/jsonrpc.h 00:03:43.290 TEST_HEADER include/spdk/keyring.h 00:03:43.290 TEST_HEADER include/spdk/keyring_module.h 00:03:43.290 TEST_HEADER include/spdk/likely.h 00:03:43.290 TEST_HEADER include/spdk/log.h 00:03:43.290 CC test/dma/test_dma/test_dma.o 00:03:43.290 TEST_HEADER include/spdk/lvol.h 00:03:43.290 TEST_HEADER include/spdk/md5.h 00:03:43.290 TEST_HEADER include/spdk/memory.h 00:03:43.290 TEST_HEADER include/spdk/mmio.h 00:03:43.290 TEST_HEADER include/spdk/nbd.h 00:03:43.290 TEST_HEADER include/spdk/net.h 00:03:43.290 TEST_HEADER include/spdk/notify.h 00:03:43.290 TEST_HEADER include/spdk/nvme.h 00:03:43.290 TEST_HEADER include/spdk/nvme_intel.h 00:03:43.290 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:43.290 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:43.290 TEST_HEADER include/spdk/nvme_spec.h 00:03:43.290 TEST_HEADER include/spdk/nvme_zns.h 00:03:43.290 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:43.290 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:43.290 TEST_HEADER include/spdk/nvmf.h 00:03:43.290 TEST_HEADER include/spdk/nvmf_spec.h 00:03:43.290 TEST_HEADER include/spdk/nvmf_transport.h 00:03:43.290 TEST_HEADER include/spdk/opal.h 00:03:43.290 CC test/app/bdev_svc/bdev_svc.o 00:03:43.290 TEST_HEADER include/spdk/opal_spec.h 00:03:43.290 TEST_HEADER include/spdk/pci_ids.h 00:03:43.290 TEST_HEADER 
include/spdk/pipe.h 00:03:43.290 TEST_HEADER include/spdk/queue.h 00:03:43.290 TEST_HEADER include/spdk/reduce.h 00:03:43.290 TEST_HEADER include/spdk/rpc.h 00:03:43.290 TEST_HEADER include/spdk/scheduler.h 00:03:43.290 TEST_HEADER include/spdk/scsi.h 00:03:43.290 TEST_HEADER include/spdk/scsi_spec.h 00:03:43.290 TEST_HEADER include/spdk/sock.h 00:03:43.290 TEST_HEADER include/spdk/stdinc.h 00:03:43.290 TEST_HEADER include/spdk/string.h 00:03:43.290 TEST_HEADER include/spdk/thread.h 00:03:43.290 TEST_HEADER include/spdk/trace.h 00:03:43.290 TEST_HEADER include/spdk/trace_parser.h 00:03:43.291 TEST_HEADER include/spdk/tree.h 00:03:43.291 TEST_HEADER include/spdk/ublk.h 00:03:43.291 TEST_HEADER include/spdk/util.h 00:03:43.291 TEST_HEADER include/spdk/uuid.h 00:03:43.291 TEST_HEADER include/spdk/version.h 00:03:43.291 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:43.291 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:43.291 TEST_HEADER include/spdk/vhost.h 00:03:43.291 TEST_HEADER include/spdk/vmd.h 00:03:43.291 TEST_HEADER include/spdk/xor.h 00:03:43.291 TEST_HEADER include/spdk/zipf.h 00:03:43.291 CXX test/cpp_headers/accel.o 00:03:43.291 LINK interrupt_tgt 00:03:43.291 LINK poller_perf 00:03:43.291 LINK nvmf_tgt 00:03:43.291 LINK zipf 00:03:43.291 LINK spdk_trace_record 00:03:43.291 LINK bdev_svc 00:03:43.291 LINK ioat_perf 00:03:43.291 CXX test/cpp_headers/accel_module.o 00:03:43.564 CXX test/cpp_headers/assert.o 00:03:43.564 LINK spdk_trace 00:03:43.564 CXX test/cpp_headers/barrier.o 00:03:43.564 CC examples/ioat/verify/verify.o 00:03:43.564 CXX test/cpp_headers/base64.o 00:03:43.564 CC test/event/event_perf/event_perf.o 00:03:43.564 CC test/env/vtophys/vtophys.o 00:03:43.564 LINK test_dma 00:03:43.564 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:43.564 CC examples/thread/thread/thread_ex.o 00:03:43.564 CC test/env/mem_callbacks/mem_callbacks.o 00:03:43.833 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:43.833 CC app/iscsi_tgt/iscsi_tgt.o 
00:03:43.833 CXX test/cpp_headers/bdev.o 00:03:43.833 LINK event_perf 00:03:43.833 LINK verify 00:03:43.833 LINK vtophys 00:03:43.833 LINK env_dpdk_post_init 00:03:43.833 LINK thread 00:03:43.833 LINK iscsi_tgt 00:03:43.833 CXX test/cpp_headers/bdev_module.o 00:03:43.833 CC test/rpc_client/rpc_client_test.o 00:03:44.092 CC test/event/reactor/reactor.o 00:03:44.092 CC test/app/jsoncat/jsoncat.o 00:03:44.092 CC test/app/histogram_perf/histogram_perf.o 00:03:44.092 CC test/env/memory/memory_ut.o 00:03:44.092 CXX test/cpp_headers/bdev_zone.o 00:03:44.092 LINK nvme_fuzz 00:03:44.092 LINK jsoncat 00:03:44.092 LINK reactor 00:03:44.092 LINK rpc_client_test 00:03:44.092 LINK histogram_perf 00:03:44.092 LINK mem_callbacks 00:03:44.359 CC examples/sock/hello_world/hello_sock.o 00:03:44.359 CC app/spdk_tgt/spdk_tgt.o 00:03:44.359 CXX test/cpp_headers/bit_array.o 00:03:44.359 CC test/event/reactor_perf/reactor_perf.o 00:03:44.359 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:44.359 CC test/env/pci/pci_ut.o 00:03:44.359 CC test/event/app_repeat/app_repeat.o 00:03:44.359 CXX test/cpp_headers/bit_pool.o 00:03:44.359 LINK spdk_tgt 00:03:44.359 CC test/accel/dif/dif.o 00:03:44.359 CC test/blobfs/mkfs/mkfs.o 00:03:44.359 LINK reactor_perf 00:03:44.359 LINK hello_sock 00:03:44.621 LINK app_repeat 00:03:44.621 CXX test/cpp_headers/blob_bdev.o 00:03:44.621 CXX test/cpp_headers/blobfs_bdev.o 00:03:44.621 LINK mkfs 00:03:44.621 CC app/spdk_lspci/spdk_lspci.o 00:03:44.880 LINK pci_ut 00:03:44.880 CC examples/vmd/lsvmd/lsvmd.o 00:03:44.880 CXX test/cpp_headers/blobfs.o 00:03:44.880 CC examples/vmd/led/led.o 00:03:44.880 LINK spdk_lspci 00:03:44.880 CC test/event/scheduler/scheduler.o 00:03:44.880 LINK lsvmd 00:03:45.140 LINK led 00:03:45.140 CXX test/cpp_headers/blob.o 00:03:45.140 LINK scheduler 00:03:45.140 CC test/lvol/esnap/esnap.o 00:03:45.140 CC app/spdk_nvme_perf/perf.o 00:03:45.140 CXX test/cpp_headers/conf.o 00:03:45.140 CC app/spdk_nvme_identify/identify.o 00:03:45.140 LINK 
memory_ut 00:03:45.140 CC app/spdk_nvme_discover/discovery_aer.o 00:03:45.140 LINK dif 00:03:45.400 CXX test/cpp_headers/config.o 00:03:45.400 CC examples/idxd/perf/perf.o 00:03:45.400 CXX test/cpp_headers/cpuset.o 00:03:45.400 CC app/spdk_top/spdk_top.o 00:03:45.400 LINK spdk_nvme_discover 00:03:45.400 CXX test/cpp_headers/crc16.o 00:03:45.400 CC app/vhost/vhost.o 00:03:45.400 CC app/spdk_dd/spdk_dd.o 00:03:45.660 CXX test/cpp_headers/crc32.o 00:03:45.660 LINK vhost 00:03:45.660 LINK idxd_perf 00:03:45.660 CC app/fio/nvme/fio_plugin.o 00:03:45.922 CXX test/cpp_headers/crc64.o 00:03:45.922 LINK spdk_dd 00:03:45.922 CXX test/cpp_headers/dif.o 00:03:45.922 CC app/fio/bdev/fio_plugin.o 00:03:45.922 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:46.183 LINK spdk_nvme_perf 00:03:46.183 CXX test/cpp_headers/dma.o 00:03:46.183 LINK spdk_nvme_identify 00:03:46.183 CXX test/cpp_headers/endian.o 00:03:46.183 CC test/nvme/aer/aer.o 00:03:46.183 LINK iscsi_fuzz 00:03:46.183 CC test/nvme/reset/reset.o 00:03:46.183 LINK hello_fsdev 00:03:46.442 LINK spdk_nvme 00:03:46.442 LINK spdk_top 00:03:46.443 CXX test/cpp_headers/env_dpdk.o 00:03:46.443 CC test/bdev/bdevio/bdevio.o 00:03:46.443 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:46.443 LINK spdk_bdev 00:03:46.443 CC test/app/stub/stub.o 00:03:46.443 CXX test/cpp_headers/env.o 00:03:46.443 LINK aer 00:03:46.443 LINK reset 00:03:46.702 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:46.702 CC examples/accel/perf/accel_perf.o 00:03:46.702 CXX test/cpp_headers/event.o 00:03:46.702 LINK stub 00:03:46.702 CC examples/blob/cli/blobcli.o 00:03:46.702 CC examples/blob/hello_world/hello_blob.o 00:03:46.702 CC test/nvme/sgl/sgl.o 00:03:46.702 CC test/nvme/e2edp/nvme_dp.o 00:03:46.702 CXX test/cpp_headers/fd_group.o 00:03:46.702 CXX test/cpp_headers/fd.o 00:03:46.961 LINK bdevio 00:03:46.961 LINK hello_blob 00:03:46.961 CXX test/cpp_headers/file.o 00:03:46.961 LINK vhost_fuzz 00:03:46.961 CXX test/cpp_headers/fsdev.o 00:03:46.961 
LINK sgl 00:03:46.961 LINK nvme_dp 00:03:47.221 CC examples/nvme/hello_world/hello_world.o 00:03:47.221 LINK accel_perf 00:03:47.221 CXX test/cpp_headers/fsdev_module.o 00:03:47.221 LINK blobcli 00:03:47.221 CC examples/nvme/reconnect/reconnect.o 00:03:47.221 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:47.221 CC test/nvme/overhead/overhead.o 00:03:47.221 CC test/nvme/err_injection/err_injection.o 00:03:47.221 CC test/nvme/startup/startup.o 00:03:47.221 LINK hello_world 00:03:47.221 CXX test/cpp_headers/ftl.o 00:03:47.221 CXX test/cpp_headers/fuse_dispatcher.o 00:03:47.481 LINK err_injection 00:03:47.481 LINK startup 00:03:47.481 CC test/nvme/reserve/reserve.o 00:03:47.481 CXX test/cpp_headers/gpt_spec.o 00:03:47.481 LINK overhead 00:03:47.481 CC test/nvme/simple_copy/simple_copy.o 00:03:47.481 CC test/nvme/connect_stress/connect_stress.o 00:03:47.481 LINK reconnect 00:03:47.481 CXX test/cpp_headers/hexlify.o 00:03:47.741 CXX test/cpp_headers/histogram_data.o 00:03:47.741 CXX test/cpp_headers/idxd.o 00:03:47.741 LINK reserve 00:03:47.741 LINK nvme_manage 00:03:47.741 LINK connect_stress 00:03:47.741 LINK simple_copy 00:03:47.741 CC examples/nvme/arbitration/arbitration.o 00:03:47.741 CXX test/cpp_headers/idxd_spec.o 00:03:47.741 CC examples/nvme/hotplug/hotplug.o 00:03:47.741 CC examples/bdev/hello_world/hello_bdev.o 00:03:47.741 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:48.000 CC test/nvme/boot_partition/boot_partition.o 00:03:48.000 CC examples/nvme/abort/abort.o 00:03:48.000 CC test/nvme/compliance/nvme_compliance.o 00:03:48.000 CXX test/cpp_headers/init.o 00:03:48.000 LINK cmb_copy 00:03:48.000 CC examples/bdev/bdevperf/bdevperf.o 00:03:48.000 LINK hello_bdev 00:03:48.000 LINK hotplug 00:03:48.000 LINK boot_partition 00:03:48.260 CXX test/cpp_headers/ioat.o 00:03:48.260 LINK arbitration 00:03:48.260 CXX test/cpp_headers/ioat_spec.o 00:03:48.260 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:48.260 CXX test/cpp_headers/iscsi_spec.o 00:03:48.260 
CXX test/cpp_headers/json.o 00:03:48.260 CC test/nvme/fused_ordering/fused_ordering.o 00:03:48.260 LINK nvme_compliance 00:03:48.260 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:48.260 LINK abort 00:03:48.520 LINK pmr_persistence 00:03:48.520 CXX test/cpp_headers/jsonrpc.o 00:03:48.520 CXX test/cpp_headers/keyring.o 00:03:48.520 CC test/nvme/fdp/fdp.o 00:03:48.520 CC test/nvme/cuse/cuse.o 00:03:48.520 LINK fused_ordering 00:03:48.520 LINK doorbell_aers 00:03:48.520 CXX test/cpp_headers/keyring_module.o 00:03:48.520 CXX test/cpp_headers/likely.o 00:03:48.520 CXX test/cpp_headers/log.o 00:03:48.520 CXX test/cpp_headers/lvol.o 00:03:48.781 CXX test/cpp_headers/md5.o 00:03:48.781 CXX test/cpp_headers/memory.o 00:03:48.781 CXX test/cpp_headers/mmio.o 00:03:48.781 CXX test/cpp_headers/nbd.o 00:03:48.781 CXX test/cpp_headers/net.o 00:03:48.781 CXX test/cpp_headers/notify.o 00:03:48.781 CXX test/cpp_headers/nvme.o 00:03:48.781 CXX test/cpp_headers/nvme_intel.o 00:03:48.781 CXX test/cpp_headers/nvme_ocssd.o 00:03:48.781 LINK fdp 00:03:48.781 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:48.781 LINK bdevperf 00:03:49.040 CXX test/cpp_headers/nvme_spec.o 00:03:49.040 CXX test/cpp_headers/nvme_zns.o 00:03:49.040 CXX test/cpp_headers/nvmf_cmd.o 00:03:49.040 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:49.040 CXX test/cpp_headers/nvmf.o 00:03:49.040 CXX test/cpp_headers/nvmf_spec.o 00:03:49.040 CXX test/cpp_headers/nvmf_transport.o 00:03:49.040 CXX test/cpp_headers/opal.o 00:03:49.040 CXX test/cpp_headers/opal_spec.o 00:03:49.040 CXX test/cpp_headers/pci_ids.o 00:03:49.040 CXX test/cpp_headers/pipe.o 00:03:49.298 CXX test/cpp_headers/queue.o 00:03:49.298 CXX test/cpp_headers/reduce.o 00:03:49.298 CXX test/cpp_headers/rpc.o 00:03:49.298 CXX test/cpp_headers/scheduler.o 00:03:49.298 CXX test/cpp_headers/scsi.o 00:03:49.298 CXX test/cpp_headers/scsi_spec.o 00:03:49.298 CC examples/nvmf/nvmf/nvmf.o 00:03:49.299 CXX test/cpp_headers/sock.o 00:03:49.299 CXX test/cpp_headers/stdinc.o 
00:03:49.299 CXX test/cpp_headers/string.o 00:03:49.299 CXX test/cpp_headers/thread.o 00:03:49.299 CXX test/cpp_headers/trace.o 00:03:49.299 CXX test/cpp_headers/trace_parser.o 00:03:49.299 CXX test/cpp_headers/tree.o 00:03:49.558 CXX test/cpp_headers/ublk.o 00:03:49.558 CXX test/cpp_headers/util.o 00:03:49.558 CXX test/cpp_headers/uuid.o 00:03:49.558 CXX test/cpp_headers/version.o 00:03:49.558 CXX test/cpp_headers/vfio_user_pci.o 00:03:49.558 CXX test/cpp_headers/vfio_user_spec.o 00:03:49.558 CXX test/cpp_headers/vhost.o 00:03:49.558 CXX test/cpp_headers/vmd.o 00:03:49.558 LINK nvmf 00:03:49.558 CXX test/cpp_headers/xor.o 00:03:49.558 CXX test/cpp_headers/zipf.o 00:03:49.817 LINK cuse 00:03:51.194 LINK esnap 00:03:51.454 00:03:51.454 real 1m24.686s 00:03:51.454 user 7m18.450s 00:03:51.454 sys 1m32.841s 00:03:51.454 09:48:27 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:51.454 09:48:27 make -- common/autotest_common.sh@10 -- $ set +x 00:03:51.454 ************************************ 00:03:51.454 END TEST make 00:03:51.454 ************************************ 00:03:51.454 09:48:27 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:51.454 09:48:27 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:51.454 09:48:27 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:51.454 09:48:27 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:51.454 09:48:27 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:51.454 09:48:27 -- pm/common@44 -- $ pid=5451 00:03:51.454 09:48:27 -- pm/common@50 -- $ kill -TERM 5451 00:03:51.454 09:48:27 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:51.454 09:48:27 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:51.454 09:48:27 -- pm/common@44 -- $ pid=5452 00:03:51.454 09:48:27 -- pm/common@50 -- $ kill -TERM 5452 00:03:51.454 09:48:28 -- common/autotest_common.sh@1690 
-- # [[ y == y ]] 00:03:51.454 09:48:28 -- common/autotest_common.sh@1691 -- # lcov --version 00:03:51.454 09:48:28 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:51.714 09:48:28 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:51.714 09:48:28 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:51.714 09:48:28 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:51.714 09:48:28 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:51.714 09:48:28 -- scripts/common.sh@336 -- # IFS=.-: 00:03:51.714 09:48:28 -- scripts/common.sh@336 -- # read -ra ver1 00:03:51.714 09:48:28 -- scripts/common.sh@337 -- # IFS=.-: 00:03:51.714 09:48:28 -- scripts/common.sh@337 -- # read -ra ver2 00:03:51.714 09:48:28 -- scripts/common.sh@338 -- # local 'op=<' 00:03:51.714 09:48:28 -- scripts/common.sh@340 -- # ver1_l=2 00:03:51.714 09:48:28 -- scripts/common.sh@341 -- # ver2_l=1 00:03:51.714 09:48:28 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:51.714 09:48:28 -- scripts/common.sh@344 -- # case "$op" in 00:03:51.714 09:48:28 -- scripts/common.sh@345 -- # : 1 00:03:51.714 09:48:28 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:51.714 09:48:28 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:51.714 09:48:28 -- scripts/common.sh@365 -- # decimal 1 00:03:51.714 09:48:28 -- scripts/common.sh@353 -- # local d=1 00:03:51.714 09:48:28 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:51.714 09:48:28 -- scripts/common.sh@355 -- # echo 1 00:03:51.714 09:48:28 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:51.714 09:48:28 -- scripts/common.sh@366 -- # decimal 2 00:03:51.714 09:48:28 -- scripts/common.sh@353 -- # local d=2 00:03:51.714 09:48:28 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:51.714 09:48:28 -- scripts/common.sh@355 -- # echo 2 00:03:51.714 09:48:28 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:51.714 09:48:28 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:51.714 09:48:28 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:51.714 09:48:28 -- scripts/common.sh@368 -- # return 0 00:03:51.714 09:48:28 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:51.714 09:48:28 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:51.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:51.714 --rc genhtml_branch_coverage=1 00:03:51.714 --rc genhtml_function_coverage=1 00:03:51.714 --rc genhtml_legend=1 00:03:51.714 --rc geninfo_all_blocks=1 00:03:51.714 --rc geninfo_unexecuted_blocks=1 00:03:51.714 00:03:51.714 ' 00:03:51.714 09:48:28 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:51.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:51.714 --rc genhtml_branch_coverage=1 00:03:51.714 --rc genhtml_function_coverage=1 00:03:51.714 --rc genhtml_legend=1 00:03:51.714 --rc geninfo_all_blocks=1 00:03:51.714 --rc geninfo_unexecuted_blocks=1 00:03:51.714 00:03:51.714 ' 00:03:51.714 09:48:28 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:51.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:51.714 --rc genhtml_branch_coverage=1 00:03:51.714 --rc 
genhtml_function_coverage=1 00:03:51.714 --rc genhtml_legend=1 00:03:51.714 --rc geninfo_all_blocks=1 00:03:51.714 --rc geninfo_unexecuted_blocks=1 00:03:51.714 00:03:51.714 ' 00:03:51.714 09:48:28 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:51.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:51.714 --rc genhtml_branch_coverage=1 00:03:51.714 --rc genhtml_function_coverage=1 00:03:51.714 --rc genhtml_legend=1 00:03:51.715 --rc geninfo_all_blocks=1 00:03:51.715 --rc geninfo_unexecuted_blocks=1 00:03:51.715 00:03:51.715 ' 00:03:51.715 09:48:28 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:51.715 09:48:28 -- nvmf/common.sh@7 -- # uname -s 00:03:51.715 09:48:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:51.715 09:48:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:51.715 09:48:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:51.715 09:48:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:51.715 09:48:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:51.715 09:48:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:51.715 09:48:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:51.715 09:48:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:51.715 09:48:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:51.715 09:48:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:51.715 09:48:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bc3e89c5-a0c9-4b43-b383-a6b5a161abf4 00:03:51.715 09:48:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=bc3e89c5-a0c9-4b43-b383-a6b5a161abf4 00:03:51.715 09:48:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:51.715 09:48:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:51.715 09:48:28 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:51.715 09:48:28 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:03:51.715 09:48:28 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:51.715 09:48:28 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:51.715 09:48:28 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:51.715 09:48:28 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:51.715 09:48:28 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:51.715 09:48:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:51.715 09:48:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:51.715 09:48:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:51.715 09:48:28 -- paths/export.sh@5 -- # export PATH 00:03:51.715 09:48:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:51.715 09:48:28 -- nvmf/common.sh@51 -- # : 0 00:03:51.715 09:48:28 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:51.715 09:48:28 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:51.715 09:48:28 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:03:51.715 09:48:28 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:51.715 09:48:28 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:51.715 09:48:28 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:51.715 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:51.715 09:48:28 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:51.715 09:48:28 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:51.715 09:48:28 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:51.715 09:48:28 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:51.715 09:48:28 -- spdk/autotest.sh@32 -- # uname -s 00:03:51.715 09:48:28 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:51.715 09:48:28 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:51.715 09:48:28 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:51.715 09:48:28 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:51.715 09:48:28 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:51.715 09:48:28 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:51.715 09:48:28 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:51.715 09:48:28 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:51.715 09:48:28 -- spdk/autotest.sh@48 -- # udevadm_pid=53970 00:03:51.715 09:48:28 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:51.715 09:48:28 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:51.715 09:48:28 -- pm/common@17 -- # local monitor 00:03:51.715 09:48:28 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:51.715 09:48:28 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:51.715 09:48:28 -- pm/common@25 -- # sleep 1 00:03:51.715 09:48:28 -- pm/common@21 -- # date +%s 00:03:51.715 09:48:28 -- 
pm/common@21 -- # date +%s 00:03:51.715 09:48:28 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1729504108 00:03:51.715 09:48:28 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1729504108 00:03:51.715 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1729504108_collect-vmstat.pm.log 00:03:51.715 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1729504108_collect-cpu-load.pm.log 00:03:52.655 09:48:29 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:52.655 09:48:29 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:52.655 09:48:29 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:52.655 09:48:29 -- common/autotest_common.sh@10 -- # set +x 00:03:52.655 09:48:29 -- spdk/autotest.sh@59 -- # create_test_list 00:03:52.655 09:48:29 -- common/autotest_common.sh@748 -- # xtrace_disable 00:03:52.655 09:48:29 -- common/autotest_common.sh@10 -- # set +x 00:03:52.915 09:48:29 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:52.915 09:48:29 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:52.915 09:48:29 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:52.915 09:48:29 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:52.915 09:48:29 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:52.915 09:48:29 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:52.915 09:48:29 -- common/autotest_common.sh@1455 -- # uname 00:03:52.915 09:48:29 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:52.915 09:48:29 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:52.915 09:48:29 -- common/autotest_common.sh@1475 -- 
# uname 00:03:52.915 09:48:29 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:52.915 09:48:29 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:52.915 09:48:29 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:52.915 lcov: LCOV version 1.15 00:03:52.915 09:48:29 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:07.824 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:07.824 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:22.723 09:48:58 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:22.723 09:48:58 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:22.723 09:48:58 -- common/autotest_common.sh@10 -- # set +x 00:04:22.723 09:48:58 -- spdk/autotest.sh@78 -- # rm -f 00:04:22.723 09:48:58 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:22.723 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:22.723 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:22.723 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:22.723 09:48:59 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:22.723 09:48:59 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:04:22.723 09:48:59 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:04:22.723 09:48:59 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:04:22.723 
09:48:59 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:22.723 09:48:59 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:04:22.723 09:48:59 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:04:22.723 09:48:59 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:22.723 09:48:59 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:22.723 09:48:59 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:22.723 09:48:59 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:04:22.723 09:48:59 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:04:22.723 09:48:59 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:22.723 09:48:59 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:22.723 09:48:59 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:22.723 09:48:59 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:04:22.723 09:48:59 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:04:22.723 09:48:59 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:22.723 09:48:59 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:22.723 09:48:59 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:22.723 09:48:59 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:04:22.723 09:48:59 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:04:22.723 09:48:59 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:22.723 09:48:59 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:22.723 09:48:59 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:22.723 09:48:59 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:22.723 09:48:59 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:22.723 09:48:59 -- spdk/autotest.sh@100 -- # 
block_in_use /dev/nvme0n1 00:04:22.723 09:48:59 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:22.723 09:48:59 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:22.723 No valid GPT data, bailing 00:04:22.723 09:48:59 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:22.723 09:48:59 -- scripts/common.sh@394 -- # pt= 00:04:22.723 09:48:59 -- scripts/common.sh@395 -- # return 1 00:04:22.724 09:48:59 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:22.724 1+0 records in 00:04:22.724 1+0 records out 00:04:22.724 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00660758 s, 159 MB/s 00:04:22.724 09:48:59 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:22.724 09:48:59 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:22.724 09:48:59 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:22.724 09:48:59 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:22.724 09:48:59 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:22.724 No valid GPT data, bailing 00:04:22.724 09:48:59 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:22.724 09:48:59 -- scripts/common.sh@394 -- # pt= 00:04:22.724 09:48:59 -- scripts/common.sh@395 -- # return 1 00:04:22.724 09:48:59 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:22.724 1+0 records in 00:04:22.724 1+0 records out 00:04:22.724 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00689758 s, 152 MB/s 00:04:22.724 09:48:59 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:22.724 09:48:59 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:22.724 09:48:59 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:04:22.724 09:48:59 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:04:22.724 09:48:59 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 
00:04:22.724 No valid GPT data, bailing 00:04:22.984 09:48:59 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:22.984 09:48:59 -- scripts/common.sh@394 -- # pt= 00:04:22.984 09:48:59 -- scripts/common.sh@395 -- # return 1 00:04:22.984 09:48:59 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:22.984 1+0 records in 00:04:22.984 1+0 records out 00:04:22.984 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00610756 s, 172 MB/s 00:04:22.984 09:48:59 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:22.984 09:48:59 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:22.984 09:48:59 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:04:22.984 09:48:59 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:04:22.984 09:48:59 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:22.984 No valid GPT data, bailing 00:04:22.984 09:48:59 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:22.984 09:48:59 -- scripts/common.sh@394 -- # pt= 00:04:22.984 09:48:59 -- scripts/common.sh@395 -- # return 1 00:04:22.984 09:48:59 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:22.984 1+0 records in 00:04:22.984 1+0 records out 00:04:22.984 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00418651 s, 250 MB/s 00:04:22.984 09:48:59 -- spdk/autotest.sh@105 -- # sync 00:04:24.365 09:49:00 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:24.365 09:49:00 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:24.365 09:49:00 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:26.912 09:49:03 -- spdk/autotest.sh@111 -- # uname -s 00:04:26.912 09:49:03 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:26.912 09:49:03 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:26.912 09:49:03 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 
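Each free namespace above is detected by the absence of a partition-table signature ("No valid GPT data, bailing", empty `PTTYPE` from blkid) and then has its first MiB zeroed with `dd`. A condensed sketch of that wipe step — a simplified stand-in for the harness's `block_in_use` check plus the `dd`, not the exact implementation, which also consults `spdk-gpt.py`:

```shell
# Zero the first MiB of a block device only when blkid reports no
# partition-table signature (an empty PTTYPE means "not in use" here).
wipe_if_unused() {
    local block=$1 pt
    # blkid exits non-zero (or may be absent) when nothing is found;
    # treat that the same as an empty PTTYPE.
    pt=$(blkid -s PTTYPE -o value "$block" 2>/dev/null || true)
    if [[ -z $pt ]]; then
        dd if=/dev/zero of="$block" bs=1M count=1 2>/dev/null
    fi
}
```

The throughput figures in the log (150–250 MB/s for a single 1 MiB write) are dominated by per-call overhead, not device speed.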
00:04:27.877 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:27.877 Hugepages 00:04:27.877 node hugesize free / total 00:04:27.877 node0 1048576kB 0 / 0 00:04:27.877 node0 2048kB 0 / 0 00:04:27.877 00:04:27.877 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:27.878 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:27.878 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:28.137 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:28.137 09:49:04 -- spdk/autotest.sh@117 -- # uname -s 00:04:28.137 09:49:04 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:28.137 09:49:04 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:28.137 09:49:04 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:29.074 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:29.074 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:29.074 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:29.074 09:49:05 -- common/autotest_common.sh@1515 -- # sleep 1 00:04:30.453 09:49:06 -- common/autotest_common.sh@1516 -- # bdfs=() 00:04:30.453 09:49:06 -- common/autotest_common.sh@1516 -- # local bdfs 00:04:30.453 09:49:06 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:04:30.453 09:49:06 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:04:30.453 09:49:06 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:30.453 09:49:06 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:30.453 09:49:06 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:30.453 09:49:06 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:30.453 09:49:06 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:30.453 09:49:06 -- 
common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:04:30.453 09:49:06 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:30.453 09:49:06 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:30.713 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:30.713 Waiting for block devices as requested 00:04:30.973 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:30.973 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:30.973 09:49:07 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:30.973 09:49:07 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:30.973 09:49:07 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:30.973 09:49:07 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:04:30.973 09:49:07 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:30.973 09:49:07 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:30.973 09:49:07 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:30.973 09:49:07 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:04:30.973 09:49:07 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:04:30.973 09:49:07 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:04:30.973 09:49:07 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:04:30.973 09:49:07 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:30.973 09:49:07 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:30.973 09:49:07 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:30.973 09:49:07 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:30.973 09:49:07 -- common/autotest_common.sh@1532 -- 
# [[ 8 -ne 0 ]] 00:04:30.973 09:49:07 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:04:30.973 09:49:07 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:30.973 09:49:07 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:30.973 09:49:07 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:30.973 09:49:07 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:30.973 09:49:07 -- common/autotest_common.sh@1541 -- # continue 00:04:30.973 09:49:07 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:30.973 09:49:07 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:30.973 09:49:07 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:30.973 09:49:07 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:04:30.973 09:49:07 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:30.973 09:49:07 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:30.973 09:49:07 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:30.973 09:49:07 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:04:30.973 09:49:07 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:04:30.973 09:49:07 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:04:30.973 09:49:07 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:04:30.973 09:49:07 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:30.973 09:49:07 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:30.973 09:49:07 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:30.973 09:49:07 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:30.973 09:49:07 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:30.973 09:49:07 -- common/autotest_common.sh@1538 -- # nvme id-ctrl 
/dev/nvme0 00:04:30.973 09:49:07 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:30.973 09:49:07 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:30.973 09:49:07 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:30.973 09:49:07 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:30.973 09:49:07 -- common/autotest_common.sh@1541 -- # continue 00:04:30.973 09:49:07 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:30.973 09:49:07 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:30.973 09:49:07 -- common/autotest_common.sh@10 -- # set +x 00:04:31.233 09:49:07 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:31.233 09:49:07 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:31.233 09:49:07 -- common/autotest_common.sh@10 -- # set +x 00:04:31.233 09:49:07 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:32.170 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:32.170 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:32.170 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:32.170 09:49:08 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:32.170 09:49:08 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:32.171 09:49:08 -- common/autotest_common.sh@10 -- # set +x 00:04:32.171 09:49:08 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:32.171 09:49:08 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:04:32.171 09:49:08 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:04:32.171 09:49:08 -- common/autotest_common.sh@1561 -- # bdfs=() 00:04:32.171 09:49:08 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:04:32.171 09:49:08 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:04:32.171 09:49:08 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:04:32.171 09:49:08 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:04:32.171 
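The per-controller probe traced above pulls the `oacs` field out of `nvme id-ctrl`, masks out the Namespace Management bit (0x8), and then checks `unvmcap` for unallocated capacity before deciding whether a namespace revert is needed. A minimal sketch of the bit test, split into a pure helper so the CLI call stays separate (the function name is illustrative, not from SPDK):

```shell
# Extract bit 3 (Namespace Management/Attachment support in the OACS
# field, per the NVMe spec) from a raw value exactly as id-ctrl prints
# it, e.g. " 0x12a". Echoes 8 when supported, 0 otherwise, matching the
# oacs_ns_manage value seen in the trace.
oacs_ns_manage_bit() {
    (( $1 & 0x8 )) && echo 8 || echo 0
}

# How the log wires it up (requires nvme-cli and a controller node):
#   oacs=$(nvme id-ctrl /dev/nvme1 | grep oacs | cut -d: -f2)
#   oacs_ns_manage_bit "$oacs"
```

With the logged value 0x12a, bit 3 is set, which is why both controllers report `oacs_ns_manage=8` and pass the `[[ 8 -ne 0 ]]` gate.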
09:49:08 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:32.171 09:49:08 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:32.171 09:49:08 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:32.171 09:49:08 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:32.171 09:49:08 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:32.171 09:49:08 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:04:32.171 09:49:08 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:32.171 09:49:08 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:32.171 09:49:08 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:32.171 09:49:08 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:32.171 09:49:08 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:32.171 09:49:08 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:32.171 09:49:08 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:32.430 09:49:08 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:32.430 09:49:08 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:32.430 09:49:08 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:04:32.430 09:49:08 -- common/autotest_common.sh@1570 -- # return 0 00:04:32.430 09:49:08 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:04:32.430 09:49:08 -- common/autotest_common.sh@1578 -- # return 0 00:04:32.430 09:49:08 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:32.430 09:49:08 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:32.430 09:49:08 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:32.430 09:49:08 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:32.430 09:49:08 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:32.430 09:49:08 -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:04:32.430 09:49:08 -- common/autotest_common.sh@10 -- # set +x 00:04:32.430 09:49:08 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:32.430 09:49:08 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:32.430 09:49:08 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:32.430 09:49:08 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:32.430 09:49:08 -- common/autotest_common.sh@10 -- # set +x 00:04:32.430 ************************************ 00:04:32.430 START TEST env 00:04:32.430 ************************************ 00:04:32.430 09:49:08 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:32.430 * Looking for test storage... 00:04:32.430 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:32.430 09:49:08 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:32.430 09:49:08 env -- common/autotest_common.sh@1691 -- # lcov --version 00:04:32.430 09:49:08 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:32.430 09:49:08 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:32.430 09:49:08 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:32.430 09:49:08 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:32.430 09:49:08 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:32.430 09:49:08 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:32.430 09:49:08 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:32.430 09:49:08 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:32.430 09:49:08 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:32.430 09:49:08 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:32.430 09:49:09 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:32.430 09:49:09 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:32.430 09:49:09 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:32.430 09:49:09 env -- 
scripts/common.sh@344 -- # case "$op" in 00:04:32.430 09:49:09 env -- scripts/common.sh@345 -- # : 1 00:04:32.430 09:49:09 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:32.430 09:49:09 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:32.430 09:49:09 env -- scripts/common.sh@365 -- # decimal 1 00:04:32.430 09:49:09 env -- scripts/common.sh@353 -- # local d=1 00:04:32.430 09:49:09 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:32.430 09:49:09 env -- scripts/common.sh@355 -- # echo 1 00:04:32.430 09:49:09 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:32.430 09:49:09 env -- scripts/common.sh@366 -- # decimal 2 00:04:32.430 09:49:09 env -- scripts/common.sh@353 -- # local d=2 00:04:32.430 09:49:09 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:32.430 09:49:09 env -- scripts/common.sh@355 -- # echo 2 00:04:32.430 09:49:09 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:32.430 09:49:09 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:32.430 09:49:09 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:32.430 09:49:09 env -- scripts/common.sh@368 -- # return 0 00:04:32.430 09:49:09 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:32.430 09:49:09 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:32.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.430 --rc genhtml_branch_coverage=1 00:04:32.430 --rc genhtml_function_coverage=1 00:04:32.430 --rc genhtml_legend=1 00:04:32.430 --rc geninfo_all_blocks=1 00:04:32.430 --rc geninfo_unexecuted_blocks=1 00:04:32.430 00:04:32.430 ' 00:04:32.430 09:49:09 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:32.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.430 --rc genhtml_branch_coverage=1 00:04:32.430 --rc genhtml_function_coverage=1 00:04:32.430 --rc genhtml_legend=1 00:04:32.430 --rc 
geninfo_all_blocks=1 00:04:32.430 --rc geninfo_unexecuted_blocks=1 00:04:32.430 00:04:32.430 ' 00:04:32.430 09:49:09 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:32.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.430 --rc genhtml_branch_coverage=1 00:04:32.430 --rc genhtml_function_coverage=1 00:04:32.430 --rc genhtml_legend=1 00:04:32.430 --rc geninfo_all_blocks=1 00:04:32.430 --rc geninfo_unexecuted_blocks=1 00:04:32.430 00:04:32.430 ' 00:04:32.430 09:49:09 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:32.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.430 --rc genhtml_branch_coverage=1 00:04:32.430 --rc genhtml_function_coverage=1 00:04:32.430 --rc genhtml_legend=1 00:04:32.430 --rc geninfo_all_blocks=1 00:04:32.430 --rc geninfo_unexecuted_blocks=1 00:04:32.430 00:04:32.430 ' 00:04:32.430 09:49:09 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:32.430 09:49:09 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:32.430 09:49:09 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:32.430 09:49:09 env -- common/autotest_common.sh@10 -- # set +x 00:04:32.689 ************************************ 00:04:32.689 START TEST env_memory 00:04:32.689 ************************************ 00:04:32.689 09:49:09 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:32.689 00:04:32.689 00:04:32.689 CUnit - A unit testing framework for C - Version 2.1-3 00:04:32.689 http://cunit.sourceforge.net/ 00:04:32.689 00:04:32.689 00:04:32.689 Suite: memory 00:04:32.689 Test: alloc and free memory map ...[2024-10-21 09:49:09.096958] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:32.689 passed 00:04:32.689 Test: mem map translation ...[2024-10-21 09:49:09.138348] 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:32.689 [2024-10-21 09:49:09.138384] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:32.689 [2024-10-21 09:49:09.138457] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:32.689 [2024-10-21 09:49:09.138476] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:32.689 passed 00:04:32.689 Test: mem map registration ...[2024-10-21 09:49:09.203162] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:32.689 [2024-10-21 09:49:09.203196] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:32.689 passed 00:04:32.949 Test: mem map adjacent registrations ...passed 00:04:32.949 00:04:32.949 Run Summary: Type Total Ran Passed Failed Inactive 00:04:32.949 suites 1 1 n/a 0 0 00:04:32.949 tests 4 4 4 0 0 00:04:32.949 asserts 152 152 152 0 n/a 00:04:32.949 00:04:32.949 Elapsed time = 0.225 seconds 00:04:32.949 00:04:32.949 real 0m0.273s 00:04:32.949 user 0m0.238s 00:04:32.949 sys 0m0.026s 00:04:32.949 09:49:09 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:32.949 09:49:09 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:32.949 ************************************ 00:04:32.949 END TEST env_memory 00:04:32.949 ************************************ 00:04:32.949 09:49:09 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:32.949 
09:49:09 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:32.949 09:49:09 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:32.949 09:49:09 env -- common/autotest_common.sh@10 -- # set +x 00:04:32.949 ************************************ 00:04:32.949 START TEST env_vtophys 00:04:32.949 ************************************ 00:04:32.949 09:49:09 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:32.949 EAL: lib.eal log level changed from notice to debug 00:04:32.949 EAL: Detected lcore 0 as core 0 on socket 0 00:04:32.949 EAL: Detected lcore 1 as core 0 on socket 0 00:04:32.949 EAL: Detected lcore 2 as core 0 on socket 0 00:04:32.949 EAL: Detected lcore 3 as core 0 on socket 0 00:04:32.949 EAL: Detected lcore 4 as core 0 on socket 0 00:04:32.949 EAL: Detected lcore 5 as core 0 on socket 0 00:04:32.949 EAL: Detected lcore 6 as core 0 on socket 0 00:04:32.949 EAL: Detected lcore 7 as core 0 on socket 0 00:04:32.949 EAL: Detected lcore 8 as core 0 on socket 0 00:04:32.949 EAL: Detected lcore 9 as core 0 on socket 0 00:04:32.949 EAL: Maximum logical cores by configuration: 128 00:04:32.949 EAL: Detected CPU lcores: 10 00:04:32.949 EAL: Detected NUMA nodes: 1 00:04:32.949 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:04:32.949 EAL: Detected shared linkage of DPDK 00:04:32.949 EAL: No shared files mode enabled, IPC will be disabled 00:04:32.949 EAL: Selected IOVA mode 'PA' 00:04:32.949 EAL: Probing VFIO support... 00:04:32.949 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:32.949 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:32.949 EAL: Ask a virtual area of 0x2e000 bytes 00:04:32.949 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:32.949 EAL: Setting up physically contiguous memory... 
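A few records back, scripts/common.sh traces `lt 1.15 2` to decide whether the installed lcov predates 2.x: both version strings are split into fields and compared numerically, position by position. A condensed sketch of that comparison — a simplified stand-in for `cmp_versions`, which additionally splits on `-` and `:` separators:

```shell
# Return success when version $1 sorts strictly before $2, comparing
# dot-separated numeric fields and padding the shorter version with 0s.
version_lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1  # equal versions are not "less than"
}
```

For the logged case, `version_lt 1.15 2` succeeds on the first field (1 < 2), which selects the pre-2.x lcov option set.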
00:04:32.949 EAL: Setting maximum number of open files to 524288 00:04:32.949 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:32.949 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:32.949 EAL: Ask a virtual area of 0x61000 bytes 00:04:32.949 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:32.949 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:32.949 EAL: Ask a virtual area of 0x400000000 bytes 00:04:32.949 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:32.949 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:32.949 EAL: Ask a virtual area of 0x61000 bytes 00:04:32.949 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:32.949 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:32.949 EAL: Ask a virtual area of 0x400000000 bytes 00:04:32.949 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:32.949 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:32.949 EAL: Ask a virtual area of 0x61000 bytes 00:04:32.949 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:32.949 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:32.949 EAL: Ask a virtual area of 0x400000000 bytes 00:04:32.949 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:32.949 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:32.949 EAL: Ask a virtual area of 0x61000 bytes 00:04:32.949 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:32.949 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:32.949 EAL: Ask a virtual area of 0x400000000 bytes 00:04:32.949 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:32.949 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:32.949 EAL: Hugepages will be freed exactly as allocated. 
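The four memseg-list reservations above land at evenly spaced virtual addresses: each list claims a 0x61000-byte header slot (padded to the 2 MiB hugepage boundary, 0x200000) followed by a 0x400000000 (16 GiB) region, so consecutive region bases differ by 0x400200000. A small sketch that reproduces the printed bases from that stride — the constants are read off this log, not derived from DPDK source:

```shell
# Reproduce the EAL memseg region bases above: the first region sits at
# 0x200000200000, and each further list advances by 0x400200000
# (16 GiB region + 2 MiB-aligned header slot).
memseg_region_bases() {
    local base=$((0x200000200000)) stride=$((0x400200000)) k
    for (( k = 0; k < 4; k++ )); do
        printf '0x%x\n' $(( base + k * stride ))
    done
}
```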
00:04:32.949 EAL: No shared files mode enabled, IPC is disabled 00:04:32.949 EAL: No shared files mode enabled, IPC is disabled 00:04:33.208 EAL: TSC frequency is ~2290000 KHz 00:04:33.208 EAL: Main lcore 0 is ready (tid=7fb64b2d5a40;cpuset=[0]) 00:04:33.208 EAL: Trying to obtain current memory policy. 00:04:33.208 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:33.208 EAL: Restoring previous memory policy: 0 00:04:33.208 EAL: request: mp_malloc_sync 00:04:33.208 EAL: No shared files mode enabled, IPC is disabled 00:04:33.208 EAL: Heap on socket 0 was expanded by 2MB 00:04:33.208 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:33.208 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:33.208 EAL: Mem event callback 'spdk:(nil)' registered 00:04:33.208 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:33.208 00:04:33.208 00:04:33.208 CUnit - A unit testing framework for C - Version 2.1-3 00:04:33.208 http://cunit.sourceforge.net/ 00:04:33.208 00:04:33.208 00:04:33.208 Suite: components_suite 00:04:33.468 Test: vtophys_malloc_test ...passed 00:04:33.468 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:33.468 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:33.468 EAL: Restoring previous memory policy: 4 00:04:33.468 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.468 EAL: request: mp_malloc_sync 00:04:33.468 EAL: No shared files mode enabled, IPC is disabled 00:04:33.468 EAL: Heap on socket 0 was expanded by 4MB 00:04:33.468 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.468 EAL: request: mp_malloc_sync 00:04:33.468 EAL: No shared files mode enabled, IPC is disabled 00:04:33.468 EAL: Heap on socket 0 was shrunk by 4MB 00:04:33.468 EAL: Trying to obtain current memory policy. 
00:04:33.468 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:33.468 EAL: Restoring previous memory policy: 4 00:04:33.468 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.468 EAL: request: mp_malloc_sync 00:04:33.468 EAL: No shared files mode enabled, IPC is disabled 00:04:33.468 EAL: Heap on socket 0 was expanded by 6MB 00:04:33.468 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.468 EAL: request: mp_malloc_sync 00:04:33.468 EAL: No shared files mode enabled, IPC is disabled 00:04:33.468 EAL: Heap on socket 0 was shrunk by 6MB 00:04:33.468 EAL: Trying to obtain current memory policy. 00:04:33.468 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:33.468 EAL: Restoring previous memory policy: 4 00:04:33.468 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.468 EAL: request: mp_malloc_sync 00:04:33.468 EAL: No shared files mode enabled, IPC is disabled 00:04:33.468 EAL: Heap on socket 0 was expanded by 10MB 00:04:33.727 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.727 EAL: request: mp_malloc_sync 00:04:33.727 EAL: No shared files mode enabled, IPC is disabled 00:04:33.727 EAL: Heap on socket 0 was shrunk by 10MB 00:04:33.727 EAL: Trying to obtain current memory policy. 00:04:33.727 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:33.727 EAL: Restoring previous memory policy: 4 00:04:33.727 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.727 EAL: request: mp_malloc_sync 00:04:33.727 EAL: No shared files mode enabled, IPC is disabled 00:04:33.727 EAL: Heap on socket 0 was expanded by 18MB 00:04:33.727 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.727 EAL: request: mp_malloc_sync 00:04:33.727 EAL: No shared files mode enabled, IPC is disabled 00:04:33.727 EAL: Heap on socket 0 was shrunk by 18MB 00:04:33.727 EAL: Trying to obtain current memory policy. 
00:04:33.727 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:33.727 EAL: Restoring previous memory policy: 4 00:04:33.727 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.727 EAL: request: mp_malloc_sync 00:04:33.727 EAL: No shared files mode enabled, IPC is disabled 00:04:33.727 EAL: Heap on socket 0 was expanded by 34MB 00:04:33.727 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.727 EAL: request: mp_malloc_sync 00:04:33.727 EAL: No shared files mode enabled, IPC is disabled 00:04:33.727 EAL: Heap on socket 0 was shrunk by 34MB 00:04:33.727 EAL: Trying to obtain current memory policy. 00:04:33.727 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:33.727 EAL: Restoring previous memory policy: 4 00:04:33.727 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.727 EAL: request: mp_malloc_sync 00:04:33.727 EAL: No shared files mode enabled, IPC is disabled 00:04:33.727 EAL: Heap on socket 0 was expanded by 66MB 00:04:33.986 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.986 EAL: request: mp_malloc_sync 00:04:33.986 EAL: No shared files mode enabled, IPC is disabled 00:04:33.986 EAL: Heap on socket 0 was shrunk by 66MB 00:04:33.986 EAL: Trying to obtain current memory policy. 00:04:33.986 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:34.245 EAL: Restoring previous memory policy: 4 00:04:34.245 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.245 EAL: request: mp_malloc_sync 00:04:34.245 EAL: No shared files mode enabled, IPC is disabled 00:04:34.245 EAL: Heap on socket 0 was expanded by 130MB 00:04:34.504 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.504 EAL: request: mp_malloc_sync 00:04:34.504 EAL: No shared files mode enabled, IPC is disabled 00:04:34.504 EAL: Heap on socket 0 was shrunk by 130MB 00:04:34.504 EAL: Trying to obtain current memory policy. 
00:04:34.504 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:34.763 EAL: Restoring previous memory policy: 4 00:04:34.763 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.763 EAL: request: mp_malloc_sync 00:04:34.763 EAL: No shared files mode enabled, IPC is disabled 00:04:34.763 EAL: Heap on socket 0 was expanded by 258MB 00:04:35.330 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.330 EAL: request: mp_malloc_sync 00:04:35.330 EAL: No shared files mode enabled, IPC is disabled 00:04:35.330 EAL: Heap on socket 0 was shrunk by 258MB 00:04:35.589 EAL: Trying to obtain current memory policy. 00:04:35.589 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:35.848 EAL: Restoring previous memory policy: 4 00:04:35.848 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.848 EAL: request: mp_malloc_sync 00:04:35.848 EAL: No shared files mode enabled, IPC is disabled 00:04:35.848 EAL: Heap on socket 0 was expanded by 514MB 00:04:36.842 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.120 EAL: request: mp_malloc_sync 00:04:37.120 EAL: No shared files mode enabled, IPC is disabled 00:04:37.120 EAL: Heap on socket 0 was shrunk by 514MB 00:04:37.687 EAL: Trying to obtain current memory policy. 
00:04:37.687 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:38.255 EAL: Restoring previous memory policy: 4 00:04:38.255 EAL: Calling mem event callback 'spdk:(nil)' 00:04:38.255 EAL: request: mp_malloc_sync 00:04:38.255 EAL: No shared files mode enabled, IPC is disabled 00:04:38.255 EAL: Heap on socket 0 was expanded by 1026MB 00:04:40.162 EAL: Calling mem event callback 'spdk:(nil)' 00:04:40.422 EAL: request: mp_malloc_sync 00:04:40.422 EAL: No shared files mode enabled, IPC is disabled 00:04:40.422 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:42.330 passed 00:04:42.330 00:04:42.330 Run Summary: Type Total Ran Passed Failed Inactive 00:04:42.330 suites 1 1 n/a 0 0 00:04:42.330 tests 2 2 2 0 0 00:04:42.330 asserts 5789 5789 5789 0 n/a 00:04:42.330 00:04:42.330 Elapsed time = 8.873 seconds 00:04:42.330 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.330 EAL: request: mp_malloc_sync 00:04:42.330 EAL: No shared files mode enabled, IPC is disabled 00:04:42.330 EAL: Heap on socket 0 was shrunk by 2MB 00:04:42.330 EAL: No shared files mode enabled, IPC is disabled 00:04:42.330 EAL: No shared files mode enabled, IPC is disabled 00:04:42.330 EAL: No shared files mode enabled, IPC is disabled 00:04:42.330 00:04:42.330 real 0m9.182s 00:04:42.330 user 0m7.799s 00:04:42.330 sys 0m1.231s 00:04:42.330 09:49:18 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:42.330 09:49:18 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:42.330 ************************************ 00:04:42.330 END TEST env_vtophys 00:04:42.330 ************************************ 00:04:42.330 09:49:18 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:42.330 09:49:18 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:42.330 09:49:18 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:42.330 09:49:18 env -- common/autotest_common.sh@10 -- # set +x 00:04:42.330 
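The heap expansions stepped through by vtophys_spdk_malloc_test above (4, 6, 10, 18, 34, 66, 130, 258, 514, 1026 MB) follow a simple pattern: each request is 2^n + 2 MB, i.e. a doubling payload on top of the initial 2 MB heap. A sketch that regenerates the series — purely descriptive of this log, not taken from the test source:

```shell
# Regenerate the allocation sizes (in MB) observed in the malloc test:
# 2^n + 2 for n = 1..10.
malloc_test_sizes_mb() {
    local n sizes=()
    for (( n = 1; n <= 10; n++ )); do
        sizes+=( $(( (1 << n) + 2 )) )
    done
    echo "${sizes[*]}"
}
```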
************************************ 00:04:42.330 START TEST env_pci 00:04:42.330 ************************************ 00:04:42.330 09:49:18 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:42.330 00:04:42.330 00:04:42.330 CUnit - A unit testing framework for C - Version 2.1-3 00:04:42.330 http://cunit.sourceforge.net/ 00:04:42.330 00:04:42.330 00:04:42.330 Suite: pci 00:04:42.330 Test: pci_hook ...[2024-10-21 09:49:18.648768] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56305 has claimed it 00:04:42.330 passed 00:04:42.330 00:04:42.330 Run Summary: Type Total Ran Passed Failed Inactive 00:04:42.330 suites 1 1 n/a 0 0 00:04:42.330 tests 1 1 1 0 0 00:04:42.330 asserts 25 25 25 0 n/a 00:04:42.330 00:04:42.330 Elapsed time = 0.008 seconds 00:04:42.330 EAL: Cannot find device (10000:00:01.0) 00:04:42.330 EAL: Failed to attach device on primary process 00:04:42.330 00:04:42.330 real 0m0.105s 00:04:42.330 user 0m0.035s 00:04:42.330 sys 0m0.070s 00:04:42.330 09:49:18 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:42.330 09:49:18 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:42.330 ************************************ 00:04:42.330 END TEST env_pci 00:04:42.330 ************************************ 00:04:42.330 09:49:18 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:42.331 09:49:18 env -- env/env.sh@15 -- # uname 00:04:42.331 09:49:18 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:42.331 09:49:18 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:42.331 09:49:18 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:42.331 09:49:18 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:04:42.331 09:49:18 env 
-- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:42.331 09:49:18 env -- common/autotest_common.sh@10 -- # set +x 00:04:42.331 ************************************ 00:04:42.331 START TEST env_dpdk_post_init 00:04:42.331 ************************************ 00:04:42.331 09:49:18 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:42.331 EAL: Detected CPU lcores: 10 00:04:42.331 EAL: Detected NUMA nodes: 1 00:04:42.331 EAL: Detected shared linkage of DPDK 00:04:42.331 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:42.331 EAL: Selected IOVA mode 'PA' 00:04:42.590 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:42.590 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:42.590 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:42.590 Starting DPDK initialization... 00:04:42.590 Starting SPDK post initialization... 00:04:42.590 SPDK NVMe probe 00:04:42.590 Attaching to 0000:00:10.0 00:04:42.590 Attaching to 0000:00:11.0 00:04:42.590 Attached to 0000:00:10.0 00:04:42.590 Attached to 0000:00:11.0 00:04:42.590 Cleaning up... 
00:04:42.590 00:04:42.590 real 0m0.263s 00:04:42.590 user 0m0.076s 00:04:42.590 sys 0m0.089s 00:04:42.590 09:49:19 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:42.590 09:49:19 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:42.590 ************************************ 00:04:42.590 END TEST env_dpdk_post_init 00:04:42.590 ************************************ 00:04:42.590 09:49:19 env -- env/env.sh@26 -- # uname 00:04:42.590 09:49:19 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:42.590 09:49:19 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:42.590 09:49:19 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:42.590 09:49:19 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:42.590 09:49:19 env -- common/autotest_common.sh@10 -- # set +x 00:04:42.590 ************************************ 00:04:42.590 START TEST env_mem_callbacks 00:04:42.590 ************************************ 00:04:42.590 09:49:19 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:42.590 EAL: Detected CPU lcores: 10 00:04:42.591 EAL: Detected NUMA nodes: 1 00:04:42.591 EAL: Detected shared linkage of DPDK 00:04:42.850 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:42.850 EAL: Selected IOVA mode 'PA' 00:04:42.850 00:04:42.850 00:04:42.850 CUnit - A unit testing framework for C - Version 2.1-3 00:04:42.850 http://cunit.sourceforge.net/ 00:04:42.850 00:04:42.850 00:04:42.850 Suite: memoryTELEMETRY: No legacy callbacks, legacy socket not created 00:04:42.850 00:04:42.850 Test: test ... 
00:04:42.850 register 0x200000200000 2097152 00:04:42.850 malloc 3145728 00:04:42.850 register 0x200000400000 4194304 00:04:42.850 buf 0x2000004fffc0 len 3145728 PASSED 00:04:42.850 malloc 64 00:04:42.850 buf 0x2000004ffec0 len 64 PASSED 00:04:42.850 malloc 4194304 00:04:42.850 register 0x200000800000 6291456 00:04:42.850 buf 0x2000009fffc0 len 4194304 PASSED 00:04:42.850 free 0x2000004fffc0 3145728 00:04:42.850 free 0x2000004ffec0 64 00:04:42.850 unregister 0x200000400000 4194304 PASSED 00:04:42.850 free 0x2000009fffc0 4194304 00:04:42.850 unregister 0x200000800000 6291456 PASSED 00:04:42.850 malloc 8388608 00:04:42.850 register 0x200000400000 10485760 00:04:42.850 buf 0x2000005fffc0 len 8388608 PASSED 00:04:42.850 free 0x2000005fffc0 8388608 00:04:42.850 unregister 0x200000400000 10485760 PASSED 00:04:42.850 passed 00:04:42.850 00:04:42.850 Run Summary: Type Total Ran Passed Failed Inactive 00:04:42.850 suites 1 1 n/a 0 0 00:04:42.850 tests 1 1 1 0 0 00:04:42.850 asserts 15 15 15 0 n/a 00:04:42.850 00:04:42.850 Elapsed time = 0.085 seconds 00:04:42.850 00:04:42.850 real 0m0.282s 00:04:42.850 user 0m0.112s 00:04:42.850 sys 0m0.068s 00:04:42.850 09:49:19 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:42.850 09:49:19 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:42.850 ************************************ 00:04:42.850 END TEST env_mem_callbacks 00:04:42.850 ************************************ 00:04:43.109 ************************************ 00:04:43.109 END TEST env 00:04:43.109 ************************************ 00:04:43.109 00:04:43.109 real 0m10.663s 00:04:43.109 user 0m8.491s 00:04:43.109 sys 0m1.822s 00:04:43.109 09:49:19 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:43.109 09:49:19 env -- common/autotest_common.sh@10 -- # set +x 00:04:43.109 09:49:19 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:43.109 09:49:19 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:43.109 09:49:19 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:43.109 09:49:19 -- common/autotest_common.sh@10 -- # set +x 00:04:43.109 ************************************ 00:04:43.109 START TEST rpc 00:04:43.109 ************************************ 00:04:43.109 09:49:19 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:43.109 * Looking for test storage... 00:04:43.109 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:43.109 09:49:19 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:43.109 09:49:19 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:43.109 09:49:19 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:43.369 09:49:19 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:43.369 09:49:19 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:43.369 09:49:19 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:43.369 09:49:19 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:43.369 09:49:19 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:43.369 09:49:19 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:43.369 09:49:19 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:43.369 09:49:19 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:43.369 09:49:19 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:43.369 09:49:19 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:43.369 09:49:19 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:43.369 09:49:19 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:43.369 09:49:19 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:43.369 09:49:19 rpc -- scripts/common.sh@345 -- # : 1 00:04:43.369 09:49:19 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:43.369 09:49:19 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:43.369 09:49:19 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:43.369 09:49:19 rpc -- scripts/common.sh@353 -- # local d=1 00:04:43.369 09:49:19 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:43.369 09:49:19 rpc -- scripts/common.sh@355 -- # echo 1 00:04:43.369 09:49:19 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:43.369 09:49:19 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:43.369 09:49:19 rpc -- scripts/common.sh@353 -- # local d=2 00:04:43.369 09:49:19 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:43.369 09:49:19 rpc -- scripts/common.sh@355 -- # echo 2 00:04:43.369 09:49:19 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:43.369 09:49:19 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:43.369 09:49:19 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:43.369 09:49:19 rpc -- scripts/common.sh@368 -- # return 0 00:04:43.369 09:49:19 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:43.369 09:49:19 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:43.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.369 --rc genhtml_branch_coverage=1 00:04:43.369 --rc genhtml_function_coverage=1 00:04:43.369 --rc genhtml_legend=1 00:04:43.369 --rc geninfo_all_blocks=1 00:04:43.369 --rc geninfo_unexecuted_blocks=1 00:04:43.369 00:04:43.369 ' 00:04:43.369 09:49:19 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:43.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.369 --rc genhtml_branch_coverage=1 00:04:43.369 --rc genhtml_function_coverage=1 00:04:43.369 --rc genhtml_legend=1 00:04:43.369 --rc geninfo_all_blocks=1 00:04:43.369 --rc geninfo_unexecuted_blocks=1 00:04:43.369 00:04:43.369 ' 00:04:43.369 09:49:19 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:43.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:43.369 --rc genhtml_branch_coverage=1 00:04:43.369 --rc genhtml_function_coverage=1 00:04:43.369 --rc genhtml_legend=1 00:04:43.369 --rc geninfo_all_blocks=1 00:04:43.369 --rc geninfo_unexecuted_blocks=1 00:04:43.369 00:04:43.369 ' 00:04:43.369 09:49:19 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:43.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.369 --rc genhtml_branch_coverage=1 00:04:43.369 --rc genhtml_function_coverage=1 00:04:43.369 --rc genhtml_legend=1 00:04:43.369 --rc geninfo_all_blocks=1 00:04:43.369 --rc geninfo_unexecuted_blocks=1 00:04:43.369 00:04:43.369 ' 00:04:43.369 09:49:19 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56432 00:04:43.369 09:49:19 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:43.369 09:49:19 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:43.369 09:49:19 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56432 00:04:43.369 09:49:19 rpc -- common/autotest_common.sh@831 -- # '[' -z 56432 ']' 00:04:43.369 09:49:19 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:43.369 09:49:19 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:43.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:43.369 09:49:19 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:43.369 09:49:19 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:43.369 09:49:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.369 [2024-10-21 09:49:19.849397] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
00:04:43.369 [2024-10-21 09:49:19.849530] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56432 ] 00:04:43.628 [2024-10-21 09:49:20.013784] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:43.628 [2024-10-21 09:49:20.159725] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:43.628 [2024-10-21 09:49:20.159788] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56432' to capture a snapshot of events at runtime. 00:04:43.628 [2024-10-21 09:49:20.159799] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:43.628 [2024-10-21 09:49:20.159810] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:43.628 [2024-10-21 09:49:20.159818] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56432 for offline analysis/debug. 
00:04:43.628 [2024-10-21 09:49:20.161121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.008 09:49:21 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:45.008 09:49:21 rpc -- common/autotest_common.sh@864 -- # return 0 00:04:45.008 09:49:21 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:45.008 09:49:21 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:45.008 09:49:21 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:45.008 09:49:21 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:45.008 09:49:21 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:45.008 09:49:21 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:45.008 09:49:21 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.008 ************************************ 00:04:45.008 START TEST rpc_integrity 00:04:45.008 ************************************ 00:04:45.008 09:49:21 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:45.008 09:49:21 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:45.008 09:49:21 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.008 09:49:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:45.008 09:49:21 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.008 09:49:21 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:45.008 09:49:21 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:45.008 09:49:21 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:45.008 09:49:21 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:45.008 09:49:21 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.008 09:49:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:45.008 09:49:21 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.008 09:49:21 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:45.008 09:49:21 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:45.008 09:49:21 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.008 09:49:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:45.008 09:49:21 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.008 09:49:21 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:45.008 { 00:04:45.008 "name": "Malloc0", 00:04:45.008 "aliases": [ 00:04:45.008 "fe9758c9-bd29-4eee-a2b0-8a5496ba5c3b" 00:04:45.008 ], 00:04:45.008 "product_name": "Malloc disk", 00:04:45.008 "block_size": 512, 00:04:45.008 "num_blocks": 16384, 00:04:45.008 "uuid": "fe9758c9-bd29-4eee-a2b0-8a5496ba5c3b", 00:04:45.008 "assigned_rate_limits": { 00:04:45.008 "rw_ios_per_sec": 0, 00:04:45.008 "rw_mbytes_per_sec": 0, 00:04:45.008 "r_mbytes_per_sec": 0, 00:04:45.008 "w_mbytes_per_sec": 0 00:04:45.008 }, 00:04:45.008 "claimed": false, 00:04:45.008 "zoned": false, 00:04:45.008 "supported_io_types": { 00:04:45.008 "read": true, 00:04:45.008 "write": true, 00:04:45.008 "unmap": true, 00:04:45.008 "flush": true, 00:04:45.008 "reset": true, 00:04:45.008 "nvme_admin": false, 00:04:45.008 "nvme_io": false, 00:04:45.008 "nvme_io_md": false, 00:04:45.008 "write_zeroes": true, 00:04:45.008 "zcopy": true, 00:04:45.008 "get_zone_info": false, 00:04:45.008 "zone_management": false, 00:04:45.008 "zone_append": false, 00:04:45.008 "compare": false, 00:04:45.008 "compare_and_write": false, 00:04:45.008 "abort": true, 00:04:45.008 "seek_hole": false, 
00:04:45.008 "seek_data": false, 00:04:45.008 "copy": true, 00:04:45.008 "nvme_iov_md": false 00:04:45.008 }, 00:04:45.008 "memory_domains": [ 00:04:45.008 { 00:04:45.008 "dma_device_id": "system", 00:04:45.008 "dma_device_type": 1 00:04:45.008 }, 00:04:45.008 { 00:04:45.008 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:45.008 "dma_device_type": 2 00:04:45.008 } 00:04:45.008 ], 00:04:45.008 "driver_specific": {} 00:04:45.008 } 00:04:45.008 ]' 00:04:45.008 09:49:21 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:45.008 09:49:21 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:45.008 09:49:21 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:45.008 09:49:21 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.008 09:49:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:45.008 [2024-10-21 09:49:21.353619] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:45.008 [2024-10-21 09:49:21.353691] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:45.008 [2024-10-21 09:49:21.353713] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:04:45.008 [2024-10-21 09:49:21.353726] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:45.008 [2024-10-21 09:49:21.356071] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:45.008 [2024-10-21 09:49:21.356108] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:45.008 Passthru0 00:04:45.008 09:49:21 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.008 09:49:21 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:45.008 09:49:21 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.008 09:49:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:04:45.008 09:49:21 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.008 09:49:21 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:45.008 { 00:04:45.008 "name": "Malloc0", 00:04:45.008 "aliases": [ 00:04:45.008 "fe9758c9-bd29-4eee-a2b0-8a5496ba5c3b" 00:04:45.008 ], 00:04:45.008 "product_name": "Malloc disk", 00:04:45.008 "block_size": 512, 00:04:45.008 "num_blocks": 16384, 00:04:45.008 "uuid": "fe9758c9-bd29-4eee-a2b0-8a5496ba5c3b", 00:04:45.008 "assigned_rate_limits": { 00:04:45.008 "rw_ios_per_sec": 0, 00:04:45.008 "rw_mbytes_per_sec": 0, 00:04:45.008 "r_mbytes_per_sec": 0, 00:04:45.008 "w_mbytes_per_sec": 0 00:04:45.008 }, 00:04:45.008 "claimed": true, 00:04:45.008 "claim_type": "exclusive_write", 00:04:45.008 "zoned": false, 00:04:45.008 "supported_io_types": { 00:04:45.008 "read": true, 00:04:45.008 "write": true, 00:04:45.008 "unmap": true, 00:04:45.008 "flush": true, 00:04:45.008 "reset": true, 00:04:45.008 "nvme_admin": false, 00:04:45.008 "nvme_io": false, 00:04:45.008 "nvme_io_md": false, 00:04:45.008 "write_zeroes": true, 00:04:45.008 "zcopy": true, 00:04:45.009 "get_zone_info": false, 00:04:45.009 "zone_management": false, 00:04:45.009 "zone_append": false, 00:04:45.009 "compare": false, 00:04:45.009 "compare_and_write": false, 00:04:45.009 "abort": true, 00:04:45.009 "seek_hole": false, 00:04:45.009 "seek_data": false, 00:04:45.009 "copy": true, 00:04:45.009 "nvme_iov_md": false 00:04:45.009 }, 00:04:45.009 "memory_domains": [ 00:04:45.009 { 00:04:45.009 "dma_device_id": "system", 00:04:45.009 "dma_device_type": 1 00:04:45.009 }, 00:04:45.009 { 00:04:45.009 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:45.009 "dma_device_type": 2 00:04:45.009 } 00:04:45.009 ], 00:04:45.009 "driver_specific": {} 00:04:45.009 }, 00:04:45.009 { 00:04:45.009 "name": "Passthru0", 00:04:45.009 "aliases": [ 00:04:45.009 "d2e90200-2f97-561c-afcb-47594197e27c" 00:04:45.009 ], 00:04:45.009 "product_name": "passthru", 00:04:45.009 
"block_size": 512, 00:04:45.009 "num_blocks": 16384, 00:04:45.009 "uuid": "d2e90200-2f97-561c-afcb-47594197e27c", 00:04:45.009 "assigned_rate_limits": { 00:04:45.009 "rw_ios_per_sec": 0, 00:04:45.009 "rw_mbytes_per_sec": 0, 00:04:45.009 "r_mbytes_per_sec": 0, 00:04:45.009 "w_mbytes_per_sec": 0 00:04:45.009 }, 00:04:45.009 "claimed": false, 00:04:45.009 "zoned": false, 00:04:45.009 "supported_io_types": { 00:04:45.009 "read": true, 00:04:45.009 "write": true, 00:04:45.009 "unmap": true, 00:04:45.009 "flush": true, 00:04:45.009 "reset": true, 00:04:45.009 "nvme_admin": false, 00:04:45.009 "nvme_io": false, 00:04:45.009 "nvme_io_md": false, 00:04:45.009 "write_zeroes": true, 00:04:45.009 "zcopy": true, 00:04:45.009 "get_zone_info": false, 00:04:45.009 "zone_management": false, 00:04:45.009 "zone_append": false, 00:04:45.009 "compare": false, 00:04:45.009 "compare_and_write": false, 00:04:45.009 "abort": true, 00:04:45.009 "seek_hole": false, 00:04:45.009 "seek_data": false, 00:04:45.009 "copy": true, 00:04:45.009 "nvme_iov_md": false 00:04:45.009 }, 00:04:45.009 "memory_domains": [ 00:04:45.009 { 00:04:45.009 "dma_device_id": "system", 00:04:45.009 "dma_device_type": 1 00:04:45.009 }, 00:04:45.009 { 00:04:45.009 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:45.009 "dma_device_type": 2 00:04:45.009 } 00:04:45.009 ], 00:04:45.009 "driver_specific": { 00:04:45.009 "passthru": { 00:04:45.009 "name": "Passthru0", 00:04:45.009 "base_bdev_name": "Malloc0" 00:04:45.009 } 00:04:45.009 } 00:04:45.009 } 00:04:45.009 ]' 00:04:45.009 09:49:21 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:45.009 09:49:21 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:45.009 09:49:21 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:45.009 09:49:21 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.009 09:49:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:45.009 09:49:21 rpc.rpc_integrity 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.009 09:49:21 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:45.009 09:49:21 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.009 09:49:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:45.009 09:49:21 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.009 09:49:21 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:45.009 09:49:21 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.009 09:49:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:45.009 09:49:21 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.009 09:49:21 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:45.009 09:49:21 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:45.009 ************************************ 00:04:45.009 END TEST rpc_integrity 00:04:45.009 ************************************ 00:04:45.009 09:49:21 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:45.009 00:04:45.009 real 0m0.344s 00:04:45.009 user 0m0.181s 00:04:45.009 sys 0m0.055s 00:04:45.009 09:49:21 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:45.009 09:49:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:45.009 09:49:21 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:45.009 09:49:21 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:45.009 09:49:21 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:45.009 09:49:21 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.009 ************************************ 00:04:45.009 START TEST rpc_plugins 00:04:45.009 ************************************ 00:04:45.009 09:49:21 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:04:45.009 09:49:21 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:04:45.009 09:49:21 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.009 09:49:21 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:45.269 09:49:21 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.269 09:49:21 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:45.269 09:49:21 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:45.269 09:49:21 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.269 09:49:21 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:45.269 09:49:21 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.269 09:49:21 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:45.269 { 00:04:45.269 "name": "Malloc1", 00:04:45.269 "aliases": [ 00:04:45.269 "42b30050-42fe-4c20-b2a2-e8570de7ef9c" 00:04:45.269 ], 00:04:45.269 "product_name": "Malloc disk", 00:04:45.269 "block_size": 4096, 00:04:45.269 "num_blocks": 256, 00:04:45.269 "uuid": "42b30050-42fe-4c20-b2a2-e8570de7ef9c", 00:04:45.269 "assigned_rate_limits": { 00:04:45.269 "rw_ios_per_sec": 0, 00:04:45.269 "rw_mbytes_per_sec": 0, 00:04:45.269 "r_mbytes_per_sec": 0, 00:04:45.269 "w_mbytes_per_sec": 0 00:04:45.269 }, 00:04:45.269 "claimed": false, 00:04:45.269 "zoned": false, 00:04:45.269 "supported_io_types": { 00:04:45.269 "read": true, 00:04:45.269 "write": true, 00:04:45.269 "unmap": true, 00:04:45.269 "flush": true, 00:04:45.269 "reset": true, 00:04:45.269 "nvme_admin": false, 00:04:45.269 "nvme_io": false, 00:04:45.269 "nvme_io_md": false, 00:04:45.269 "write_zeroes": true, 00:04:45.269 "zcopy": true, 00:04:45.269 "get_zone_info": false, 00:04:45.269 "zone_management": false, 00:04:45.269 "zone_append": false, 00:04:45.269 "compare": false, 00:04:45.269 "compare_and_write": false, 00:04:45.269 "abort": true, 00:04:45.269 "seek_hole": false, 00:04:45.269 "seek_data": false, 00:04:45.269 "copy": 
true, 00:04:45.269 "nvme_iov_md": false 00:04:45.269 }, 00:04:45.269 "memory_domains": [ 00:04:45.269 { 00:04:45.269 "dma_device_id": "system", 00:04:45.269 "dma_device_type": 1 00:04:45.269 }, 00:04:45.269 { 00:04:45.269 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:45.269 "dma_device_type": 2 00:04:45.269 } 00:04:45.269 ], 00:04:45.269 "driver_specific": {} 00:04:45.269 } 00:04:45.269 ]' 00:04:45.269 09:49:21 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:45.269 09:49:21 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:45.269 09:49:21 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:45.269 09:49:21 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.269 09:49:21 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:45.269 09:49:21 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.269 09:49:21 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:45.269 09:49:21 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.269 09:49:21 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:45.269 09:49:21 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.269 09:49:21 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:45.269 09:49:21 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:45.269 09:49:21 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:45.269 00:04:45.269 real 0m0.149s 00:04:45.269 user 0m0.083s 00:04:45.269 sys 0m0.027s 00:04:45.269 09:49:21 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:45.269 09:49:21 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:45.269 ************************************ 00:04:45.269 END TEST rpc_plugins 00:04:45.269 ************************************ 00:04:45.269 09:49:21 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:45.269 09:49:21 rpc -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:45.269 09:49:21 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:45.269 09:49:21 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.269 ************************************ 00:04:45.269 START TEST rpc_trace_cmd_test 00:04:45.269 ************************************ 00:04:45.269 09:49:21 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:04:45.269 09:49:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:45.269 09:49:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:45.269 09:49:21 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.269 09:49:21 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:45.269 09:49:21 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.270 09:49:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:45.270 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56432", 00:04:45.270 "tpoint_group_mask": "0x8", 00:04:45.270 "iscsi_conn": { 00:04:45.270 "mask": "0x2", 00:04:45.270 "tpoint_mask": "0x0" 00:04:45.270 }, 00:04:45.270 "scsi": { 00:04:45.270 "mask": "0x4", 00:04:45.270 "tpoint_mask": "0x0" 00:04:45.270 }, 00:04:45.270 "bdev": { 00:04:45.270 "mask": "0x8", 00:04:45.270 "tpoint_mask": "0xffffffffffffffff" 00:04:45.270 }, 00:04:45.270 "nvmf_rdma": { 00:04:45.270 "mask": "0x10", 00:04:45.270 "tpoint_mask": "0x0" 00:04:45.270 }, 00:04:45.270 "nvmf_tcp": { 00:04:45.270 "mask": "0x20", 00:04:45.270 "tpoint_mask": "0x0" 00:04:45.270 }, 00:04:45.270 "ftl": { 00:04:45.270 "mask": "0x40", 00:04:45.270 "tpoint_mask": "0x0" 00:04:45.270 }, 00:04:45.270 "blobfs": { 00:04:45.270 "mask": "0x80", 00:04:45.270 "tpoint_mask": "0x0" 00:04:45.270 }, 00:04:45.270 "dsa": { 00:04:45.270 "mask": "0x200", 00:04:45.270 "tpoint_mask": "0x0" 00:04:45.270 }, 00:04:45.270 "thread": { 00:04:45.270 "mask": "0x400", 00:04:45.270 
"tpoint_mask": "0x0" 00:04:45.270 }, 00:04:45.270 "nvme_pcie": { 00:04:45.270 "mask": "0x800", 00:04:45.270 "tpoint_mask": "0x0" 00:04:45.270 }, 00:04:45.270 "iaa": { 00:04:45.270 "mask": "0x1000", 00:04:45.270 "tpoint_mask": "0x0" 00:04:45.270 }, 00:04:45.270 "nvme_tcp": { 00:04:45.270 "mask": "0x2000", 00:04:45.270 "tpoint_mask": "0x0" 00:04:45.270 }, 00:04:45.270 "bdev_nvme": { 00:04:45.270 "mask": "0x4000", 00:04:45.270 "tpoint_mask": "0x0" 00:04:45.270 }, 00:04:45.270 "sock": { 00:04:45.270 "mask": "0x8000", 00:04:45.270 "tpoint_mask": "0x0" 00:04:45.270 }, 00:04:45.270 "blob": { 00:04:45.270 "mask": "0x10000", 00:04:45.270 "tpoint_mask": "0x0" 00:04:45.270 }, 00:04:45.270 "bdev_raid": { 00:04:45.270 "mask": "0x20000", 00:04:45.270 "tpoint_mask": "0x0" 00:04:45.270 }, 00:04:45.270 "scheduler": { 00:04:45.270 "mask": "0x40000", 00:04:45.270 "tpoint_mask": "0x0" 00:04:45.270 } 00:04:45.270 }' 00:04:45.270 09:49:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:45.529 09:49:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:45.529 09:49:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:45.529 09:49:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:45.529 09:49:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:45.529 09:49:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:45.529 09:49:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:45.529 09:49:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:45.529 09:49:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:45.529 09:49:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:45.529 00:04:45.529 real 0m0.276s 00:04:45.529 user 0m0.229s 00:04:45.529 sys 0m0.034s 00:04:45.530 09:49:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 
00:04:45.530 09:49:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:45.530 ************************************ 00:04:45.530 END TEST rpc_trace_cmd_test 00:04:45.530 ************************************ 00:04:45.530 09:49:22 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:45.530 09:49:22 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:45.530 09:49:22 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:45.530 09:49:22 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:45.530 09:49:22 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:45.530 09:49:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.788 ************************************ 00:04:45.788 START TEST rpc_daemon_integrity 00:04:45.788 ************************************ 00:04:45.788 09:49:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:45.788 09:49:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:45.788 09:49:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.788 09:49:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:45.788 09:49:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.788 09:49:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:45.788 09:49:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:45.788 09:49:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:45.788 09:49:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:45.788 09:49:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.788 09:49:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:45.788 09:49:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.788 09:49:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 
-- # malloc=Malloc2 00:04:45.788 09:49:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:45.788 09:49:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.788 09:49:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:45.788 09:49:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.788 09:49:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:45.788 { 00:04:45.788 "name": "Malloc2", 00:04:45.788 "aliases": [ 00:04:45.788 "3cb8735a-8513-4ace-8c0e-359581474ea7" 00:04:45.788 ], 00:04:45.788 "product_name": "Malloc disk", 00:04:45.788 "block_size": 512, 00:04:45.788 "num_blocks": 16384, 00:04:45.788 "uuid": "3cb8735a-8513-4ace-8c0e-359581474ea7", 00:04:45.788 "assigned_rate_limits": { 00:04:45.788 "rw_ios_per_sec": 0, 00:04:45.788 "rw_mbytes_per_sec": 0, 00:04:45.788 "r_mbytes_per_sec": 0, 00:04:45.788 "w_mbytes_per_sec": 0 00:04:45.788 }, 00:04:45.788 "claimed": false, 00:04:45.788 "zoned": false, 00:04:45.788 "supported_io_types": { 00:04:45.788 "read": true, 00:04:45.788 "write": true, 00:04:45.788 "unmap": true, 00:04:45.788 "flush": true, 00:04:45.788 "reset": true, 00:04:45.788 "nvme_admin": false, 00:04:45.788 "nvme_io": false, 00:04:45.788 "nvme_io_md": false, 00:04:45.788 "write_zeroes": true, 00:04:45.788 "zcopy": true, 00:04:45.788 "get_zone_info": false, 00:04:45.788 "zone_management": false, 00:04:45.788 "zone_append": false, 00:04:45.788 "compare": false, 00:04:45.788 "compare_and_write": false, 00:04:45.788 "abort": true, 00:04:45.788 "seek_hole": false, 00:04:45.788 "seek_data": false, 00:04:45.788 "copy": true, 00:04:45.788 "nvme_iov_md": false 00:04:45.788 }, 00:04:45.788 "memory_domains": [ 00:04:45.788 { 00:04:45.788 "dma_device_id": "system", 00:04:45.788 "dma_device_type": 1 00:04:45.788 }, 00:04:45.788 { 00:04:45.788 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:45.788 "dma_device_type": 2 00:04:45.788 } 
00:04:45.788 ], 00:04:45.788 "driver_specific": {} 00:04:45.788 } 00:04:45.788 ]' 00:04:45.788 09:49:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:45.788 09:49:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:45.788 09:49:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:45.788 09:49:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.788 09:49:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:45.788 [2024-10-21 09:49:22.295412] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:45.788 [2024-10-21 09:49:22.295487] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:45.788 [2024-10-21 09:49:22.295512] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:04:45.788 [2024-10-21 09:49:22.295524] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:45.788 [2024-10-21 09:49:22.298020] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:45.788 [2024-10-21 09:49:22.298059] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:45.788 Passthru0 00:04:45.788 09:49:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.788 09:49:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:45.788 09:49:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.788 09:49:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:45.788 09:49:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.788 09:49:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:45.788 { 00:04:45.788 "name": "Malloc2", 00:04:45.788 "aliases": [ 00:04:45.788 "3cb8735a-8513-4ace-8c0e-359581474ea7" 
00:04:45.788 ], 00:04:45.788 "product_name": "Malloc disk", 00:04:45.788 "block_size": 512, 00:04:45.788 "num_blocks": 16384, 00:04:45.788 "uuid": "3cb8735a-8513-4ace-8c0e-359581474ea7", 00:04:45.788 "assigned_rate_limits": { 00:04:45.788 "rw_ios_per_sec": 0, 00:04:45.788 "rw_mbytes_per_sec": 0, 00:04:45.788 "r_mbytes_per_sec": 0, 00:04:45.788 "w_mbytes_per_sec": 0 00:04:45.788 }, 00:04:45.788 "claimed": true, 00:04:45.788 "claim_type": "exclusive_write", 00:04:45.788 "zoned": false, 00:04:45.788 "supported_io_types": { 00:04:45.788 "read": true, 00:04:45.788 "write": true, 00:04:45.788 "unmap": true, 00:04:45.788 "flush": true, 00:04:45.788 "reset": true, 00:04:45.788 "nvme_admin": false, 00:04:45.788 "nvme_io": false, 00:04:45.788 "nvme_io_md": false, 00:04:45.788 "write_zeroes": true, 00:04:45.788 "zcopy": true, 00:04:45.788 "get_zone_info": false, 00:04:45.788 "zone_management": false, 00:04:45.788 "zone_append": false, 00:04:45.788 "compare": false, 00:04:45.788 "compare_and_write": false, 00:04:45.788 "abort": true, 00:04:45.788 "seek_hole": false, 00:04:45.788 "seek_data": false, 00:04:45.788 "copy": true, 00:04:45.788 "nvme_iov_md": false 00:04:45.788 }, 00:04:45.788 "memory_domains": [ 00:04:45.788 { 00:04:45.788 "dma_device_id": "system", 00:04:45.788 "dma_device_type": 1 00:04:45.788 }, 00:04:45.788 { 00:04:45.788 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:45.788 "dma_device_type": 2 00:04:45.788 } 00:04:45.788 ], 00:04:45.788 "driver_specific": {} 00:04:45.788 }, 00:04:45.788 { 00:04:45.788 "name": "Passthru0", 00:04:45.788 "aliases": [ 00:04:45.788 "0f6d0cb8-1913-541f-8401-57d21e69febb" 00:04:45.788 ], 00:04:45.788 "product_name": "passthru", 00:04:45.788 "block_size": 512, 00:04:45.788 "num_blocks": 16384, 00:04:45.788 "uuid": "0f6d0cb8-1913-541f-8401-57d21e69febb", 00:04:45.788 "assigned_rate_limits": { 00:04:45.788 "rw_ios_per_sec": 0, 00:04:45.788 "rw_mbytes_per_sec": 0, 00:04:45.788 "r_mbytes_per_sec": 0, 00:04:45.788 "w_mbytes_per_sec": 0 
00:04:45.788 }, 00:04:45.788 "claimed": false, 00:04:45.788 "zoned": false, 00:04:45.788 "supported_io_types": { 00:04:45.788 "read": true, 00:04:45.788 "write": true, 00:04:45.788 "unmap": true, 00:04:45.788 "flush": true, 00:04:45.788 "reset": true, 00:04:45.788 "nvme_admin": false, 00:04:45.788 "nvme_io": false, 00:04:45.788 "nvme_io_md": false, 00:04:45.788 "write_zeroes": true, 00:04:45.788 "zcopy": true, 00:04:45.788 "get_zone_info": false, 00:04:45.788 "zone_management": false, 00:04:45.788 "zone_append": false, 00:04:45.788 "compare": false, 00:04:45.788 "compare_and_write": false, 00:04:45.788 "abort": true, 00:04:45.788 "seek_hole": false, 00:04:45.788 "seek_data": false, 00:04:45.788 "copy": true, 00:04:45.788 "nvme_iov_md": false 00:04:45.788 }, 00:04:45.788 "memory_domains": [ 00:04:45.788 { 00:04:45.788 "dma_device_id": "system", 00:04:45.788 "dma_device_type": 1 00:04:45.788 }, 00:04:45.788 { 00:04:45.788 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:45.788 "dma_device_type": 2 00:04:45.788 } 00:04:45.788 ], 00:04:45.789 "driver_specific": { 00:04:45.789 "passthru": { 00:04:45.789 "name": "Passthru0", 00:04:45.789 "base_bdev_name": "Malloc2" 00:04:45.789 } 00:04:45.789 } 00:04:45.789 } 00:04:45.789 ]' 00:04:45.789 09:49:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:45.789 09:49:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:45.789 09:49:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:45.789 09:49:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.789 09:49:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.048 09:49:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.048 09:49:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:46.048 09:49:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:04:46.048 09:49:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.048 09:49:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.048 09:49:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:46.048 09:49:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.048 09:49:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.048 09:49:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.048 09:49:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:46.048 09:49:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:46.048 09:49:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:46.048 00:04:46.048 real 0m0.351s 00:04:46.048 user 0m0.194s 00:04:46.048 sys 0m0.049s 00:04:46.048 09:49:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:46.048 09:49:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.048 ************************************ 00:04:46.048 END TEST rpc_daemon_integrity 00:04:46.048 ************************************ 00:04:46.048 09:49:22 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:46.048 09:49:22 rpc -- rpc/rpc.sh@84 -- # killprocess 56432 00:04:46.048 09:49:22 rpc -- common/autotest_common.sh@950 -- # '[' -z 56432 ']' 00:04:46.048 09:49:22 rpc -- common/autotest_common.sh@954 -- # kill -0 56432 00:04:46.048 09:49:22 rpc -- common/autotest_common.sh@955 -- # uname 00:04:46.048 09:49:22 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:46.048 09:49:22 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 56432 00:04:46.048 09:49:22 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:46.048 09:49:22 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:46.048 
killing process with pid 56432 00:04:46.048 09:49:22 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 56432' 00:04:46.048 09:49:22 rpc -- common/autotest_common.sh@969 -- # kill 56432 00:04:46.048 09:49:22 rpc -- common/autotest_common.sh@974 -- # wait 56432 00:04:48.585 00:04:48.585 real 0m5.622s 00:04:48.585 user 0m6.019s 00:04:48.585 sys 0m0.999s 00:04:48.585 09:49:25 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:48.585 09:49:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.585 ************************************ 00:04:48.585 END TEST rpc 00:04:48.585 ************************************ 00:04:48.844 09:49:25 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:48.844 09:49:25 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:48.844 09:49:25 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:48.844 09:49:25 -- common/autotest_common.sh@10 -- # set +x 00:04:48.844 ************************************ 00:04:48.844 START TEST skip_rpc 00:04:48.844 ************************************ 00:04:48.844 09:49:25 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:48.844 * Looking for test storage... 
00:04:48.844 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:48.844 09:49:25 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:48.844 09:49:25 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:48.844 09:49:25 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:48.844 09:49:25 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:48.844 09:49:25 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:48.844 09:49:25 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:48.844 09:49:25 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:48.844 09:49:25 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:48.844 09:49:25 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:48.844 09:49:25 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:48.844 09:49:25 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:48.844 09:49:25 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:48.844 09:49:25 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:48.844 09:49:25 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:48.844 09:49:25 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:48.844 09:49:25 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:48.844 09:49:25 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:48.844 09:49:25 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:48.844 09:49:25 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:48.844 09:49:25 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:48.844 09:49:25 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:48.844 09:49:25 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:48.844 09:49:25 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:48.844 09:49:25 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:48.845 09:49:25 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:48.845 09:49:25 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:48.845 09:49:25 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:48.845 09:49:25 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:48.845 09:49:25 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:48.845 09:49:25 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:48.845 09:49:25 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:48.845 09:49:25 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:48.845 09:49:25 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:48.845 09:49:25 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:48.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.845 --rc genhtml_branch_coverage=1 00:04:48.845 --rc genhtml_function_coverage=1 00:04:48.845 --rc genhtml_legend=1 00:04:48.845 --rc geninfo_all_blocks=1 00:04:48.845 --rc geninfo_unexecuted_blocks=1 00:04:48.845 00:04:48.845 ' 00:04:48.845 09:49:25 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:48.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.845 --rc genhtml_branch_coverage=1 00:04:48.845 --rc genhtml_function_coverage=1 00:04:48.845 --rc genhtml_legend=1 00:04:48.845 --rc geninfo_all_blocks=1 00:04:48.845 --rc geninfo_unexecuted_blocks=1 00:04:48.845 00:04:48.845 ' 00:04:48.845 09:49:25 skip_rpc -- common/autotest_common.sh@1705 -- # export 
'LCOV=lcov 00:04:48.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.845 --rc genhtml_branch_coverage=1 00:04:48.845 --rc genhtml_function_coverage=1 00:04:48.845 --rc genhtml_legend=1 00:04:48.845 --rc geninfo_all_blocks=1 00:04:48.845 --rc geninfo_unexecuted_blocks=1 00:04:48.845 00:04:48.845 ' 00:04:48.845 09:49:25 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:48.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.845 --rc genhtml_branch_coverage=1 00:04:48.845 --rc genhtml_function_coverage=1 00:04:48.845 --rc genhtml_legend=1 00:04:48.845 --rc geninfo_all_blocks=1 00:04:48.845 --rc geninfo_unexecuted_blocks=1 00:04:48.845 00:04:48.845 ' 00:04:48.845 09:49:25 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:48.845 09:49:25 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:48.845 09:49:25 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:48.845 09:49:25 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:48.845 09:49:25 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:48.845 09:49:25 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.104 ************************************ 00:04:49.104 START TEST skip_rpc 00:04:49.104 ************************************ 00:04:49.104 09:49:25 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:04:49.104 09:49:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:49.104 09:49:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=56667 00:04:49.104 09:49:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:49.104 09:49:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:49.104 [2024-10-21 09:49:25.545719] Starting SPDK v25.01-pre 
git sha1 1042d663d / DPDK 23.11.0 initialization... 00:04:49.104 [2024-10-21 09:49:25.545829] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56667 ] 00:04:49.363 [2024-10-21 09:49:25.708950] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.363 [2024-10-21 09:49:25.856252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.639 09:49:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:54.639 09:49:30 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:54.639 09:49:30 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:54.639 09:49:30 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:54.639 09:49:30 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:54.639 09:49:30 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:54.639 09:49:30 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:54.639 09:49:30 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:04:54.639 09:49:30 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:54.639 09:49:30 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:54.639 09:49:30 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:54.639 09:49:30 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:54.639 09:49:30 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:54.639 09:49:30 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:54.639 09:49:30 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:04:54.639 09:49:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:54.639 09:49:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 56667 00:04:54.639 09:49:30 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 56667 ']' 00:04:54.639 09:49:30 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 56667 00:04:54.639 09:49:30 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:04:54.639 09:49:30 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:54.639 09:49:30 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 56667 00:04:54.639 09:49:30 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:54.639 09:49:30 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:54.639 killing process with pid 56667 00:04:54.639 09:49:30 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 56667' 00:04:54.639 09:49:30 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 56667 00:04:54.639 09:49:30 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 56667 00:04:56.546 00:04:56.546 real 0m7.592s 00:04:56.546 user 0m6.988s 00:04:56.546 sys 0m0.526s 00:04:56.546 09:49:33 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:56.546 09:49:33 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:56.546 ************************************ 00:04:56.546 END TEST skip_rpc 00:04:56.546 ************************************ 00:04:56.546 09:49:33 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:56.546 09:49:33 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:56.546 09:49:33 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:56.546 09:49:33 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:56.546 
************************************ 00:04:56.546 START TEST skip_rpc_with_json 00:04:56.546 ************************************ 00:04:56.546 09:49:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:04:56.546 09:49:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:56.546 09:49:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=56771 00:04:56.546 09:49:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:56.546 09:49:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:56.546 09:49:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 56771 00:04:56.546 09:49:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 56771 ']' 00:04:56.546 09:49:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:56.546 09:49:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:56.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:56.546 09:49:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:56.546 09:49:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:56.546 09:49:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:56.818 [2024-10-21 09:49:33.199394] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
00:04:56.818 [2024-10-21 09:49:33.199499] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56771 ] 00:04:56.818 [2024-10-21 09:49:33.348471] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.093 [2024-10-21 09:49:33.485074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.033 09:49:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:58.033 09:49:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:04:58.033 09:49:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:58.033 09:49:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:58.033 09:49:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:58.033 [2024-10-21 09:49:34.538004] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:58.033 request: 00:04:58.033 { 00:04:58.033 "trtype": "tcp", 00:04:58.033 "method": "nvmf_get_transports", 00:04:58.033 "req_id": 1 00:04:58.033 } 00:04:58.033 Got JSON-RPC error response 00:04:58.033 response: 00:04:58.033 { 00:04:58.033 "code": -19, 00:04:58.033 "message": "No such device" 00:04:58.033 } 00:04:58.033 09:49:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:58.033 09:49:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:58.033 09:49:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:58.033 09:49:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:58.033 [2024-10-21 09:49:34.550127] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:04:58.033 09:49:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:58.033 09:49:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:58.033 09:49:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:58.033 09:49:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:58.293 09:49:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:58.293 09:49:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:58.293 { 00:04:58.293 "subsystems": [ 00:04:58.293 { 00:04:58.293 "subsystem": "fsdev", 00:04:58.293 "config": [ 00:04:58.293 { 00:04:58.293 "method": "fsdev_set_opts", 00:04:58.293 "params": { 00:04:58.293 "fsdev_io_pool_size": 65535, 00:04:58.293 "fsdev_io_cache_size": 256 00:04:58.293 } 00:04:58.293 } 00:04:58.293 ] 00:04:58.293 }, 00:04:58.293 { 00:04:58.293 "subsystem": "keyring", 00:04:58.293 "config": [] 00:04:58.293 }, 00:04:58.293 { 00:04:58.293 "subsystem": "iobuf", 00:04:58.293 "config": [ 00:04:58.293 { 00:04:58.293 "method": "iobuf_set_options", 00:04:58.293 "params": { 00:04:58.293 "small_pool_count": 8192, 00:04:58.293 "large_pool_count": 1024, 00:04:58.293 "small_bufsize": 8192, 00:04:58.293 "large_bufsize": 135168 00:04:58.293 } 00:04:58.293 } 00:04:58.293 ] 00:04:58.293 }, 00:04:58.293 { 00:04:58.293 "subsystem": "sock", 00:04:58.293 "config": [ 00:04:58.293 { 00:04:58.293 "method": "sock_set_default_impl", 00:04:58.293 "params": { 00:04:58.293 "impl_name": "posix" 00:04:58.293 } 00:04:58.293 }, 00:04:58.293 { 00:04:58.293 "method": "sock_impl_set_options", 00:04:58.293 "params": { 00:04:58.293 "impl_name": "ssl", 00:04:58.293 "recv_buf_size": 4096, 00:04:58.293 "send_buf_size": 4096, 00:04:58.293 "enable_recv_pipe": true, 00:04:58.293 "enable_quickack": false, 00:04:58.293 "enable_placement_id": 0, 00:04:58.293 
"enable_zerocopy_send_server": true, 00:04:58.293 "enable_zerocopy_send_client": false, 00:04:58.293 "zerocopy_threshold": 0, 00:04:58.293 "tls_version": 0, 00:04:58.293 "enable_ktls": false 00:04:58.293 } 00:04:58.293 }, 00:04:58.293 { 00:04:58.293 "method": "sock_impl_set_options", 00:04:58.293 "params": { 00:04:58.293 "impl_name": "posix", 00:04:58.293 "recv_buf_size": 2097152, 00:04:58.293 "send_buf_size": 2097152, 00:04:58.293 "enable_recv_pipe": true, 00:04:58.293 "enable_quickack": false, 00:04:58.293 "enable_placement_id": 0, 00:04:58.293 "enable_zerocopy_send_server": true, 00:04:58.293 "enable_zerocopy_send_client": false, 00:04:58.293 "zerocopy_threshold": 0, 00:04:58.293 "tls_version": 0, 00:04:58.293 "enable_ktls": false 00:04:58.293 } 00:04:58.293 } 00:04:58.293 ] 00:04:58.293 }, 00:04:58.293 { 00:04:58.293 "subsystem": "vmd", 00:04:58.293 "config": [] 00:04:58.293 }, 00:04:58.293 { 00:04:58.293 "subsystem": "accel", 00:04:58.293 "config": [ 00:04:58.293 { 00:04:58.293 "method": "accel_set_options", 00:04:58.293 "params": { 00:04:58.293 "small_cache_size": 128, 00:04:58.293 "large_cache_size": 16, 00:04:58.293 "task_count": 2048, 00:04:58.293 "sequence_count": 2048, 00:04:58.293 "buf_count": 2048 00:04:58.293 } 00:04:58.293 } 00:04:58.293 ] 00:04:58.293 }, 00:04:58.293 { 00:04:58.293 "subsystem": "bdev", 00:04:58.293 "config": [ 00:04:58.293 { 00:04:58.293 "method": "bdev_set_options", 00:04:58.293 "params": { 00:04:58.293 "bdev_io_pool_size": 65535, 00:04:58.293 "bdev_io_cache_size": 256, 00:04:58.293 "bdev_auto_examine": true, 00:04:58.293 "iobuf_small_cache_size": 128, 00:04:58.293 "iobuf_large_cache_size": 16 00:04:58.293 } 00:04:58.293 }, 00:04:58.293 { 00:04:58.293 "method": "bdev_raid_set_options", 00:04:58.293 "params": { 00:04:58.293 "process_window_size_kb": 1024, 00:04:58.293 "process_max_bandwidth_mb_sec": 0 00:04:58.293 } 00:04:58.293 }, 00:04:58.293 { 00:04:58.293 "method": "bdev_iscsi_set_options", 00:04:58.293 "params": { 00:04:58.293 
"timeout_sec": 30 00:04:58.293 } 00:04:58.293 }, 00:04:58.293 { 00:04:58.293 "method": "bdev_nvme_set_options", 00:04:58.293 "params": { 00:04:58.293 "action_on_timeout": "none", 00:04:58.293 "timeout_us": 0, 00:04:58.293 "timeout_admin_us": 0, 00:04:58.293 "keep_alive_timeout_ms": 10000, 00:04:58.293 "arbitration_burst": 0, 00:04:58.293 "low_priority_weight": 0, 00:04:58.294 "medium_priority_weight": 0, 00:04:58.294 "high_priority_weight": 0, 00:04:58.294 "nvme_adminq_poll_period_us": 10000, 00:04:58.294 "nvme_ioq_poll_period_us": 0, 00:04:58.294 "io_queue_requests": 0, 00:04:58.294 "delay_cmd_submit": true, 00:04:58.294 "transport_retry_count": 4, 00:04:58.294 "bdev_retry_count": 3, 00:04:58.294 "transport_ack_timeout": 0, 00:04:58.294 "ctrlr_loss_timeout_sec": 0, 00:04:58.294 "reconnect_delay_sec": 0, 00:04:58.294 "fast_io_fail_timeout_sec": 0, 00:04:58.294 "disable_auto_failback": false, 00:04:58.294 "generate_uuids": false, 00:04:58.294 "transport_tos": 0, 00:04:58.294 "nvme_error_stat": false, 00:04:58.294 "rdma_srq_size": 0, 00:04:58.294 "io_path_stat": false, 00:04:58.294 "allow_accel_sequence": false, 00:04:58.294 "rdma_max_cq_size": 0, 00:04:58.294 "rdma_cm_event_timeout_ms": 0, 00:04:58.294 "dhchap_digests": [ 00:04:58.294 "sha256", 00:04:58.294 "sha384", 00:04:58.294 "sha512" 00:04:58.294 ], 00:04:58.294 "dhchap_dhgroups": [ 00:04:58.294 "null", 00:04:58.294 "ffdhe2048", 00:04:58.294 "ffdhe3072", 00:04:58.294 "ffdhe4096", 00:04:58.294 "ffdhe6144", 00:04:58.294 "ffdhe8192" 00:04:58.294 ] 00:04:58.294 } 00:04:58.294 }, 00:04:58.294 { 00:04:58.294 "method": "bdev_nvme_set_hotplug", 00:04:58.294 "params": { 00:04:58.294 "period_us": 100000, 00:04:58.294 "enable": false 00:04:58.294 } 00:04:58.294 }, 00:04:58.294 { 00:04:58.294 "method": "bdev_wait_for_examine" 00:04:58.294 } 00:04:58.294 ] 00:04:58.294 }, 00:04:58.294 { 00:04:58.294 "subsystem": "scsi", 00:04:58.294 "config": null 00:04:58.294 }, 00:04:58.294 { 00:04:58.294 "subsystem": "scheduler", 
00:04:58.294 "config": [ 00:04:58.294 { 00:04:58.294 "method": "framework_set_scheduler", 00:04:58.294 "params": { 00:04:58.294 "name": "static" 00:04:58.294 } 00:04:58.294 } 00:04:58.294 ] 00:04:58.294 }, 00:04:58.294 { 00:04:58.294 "subsystem": "vhost_scsi", 00:04:58.294 "config": [] 00:04:58.294 }, 00:04:58.294 { 00:04:58.294 "subsystem": "vhost_blk", 00:04:58.294 "config": [] 00:04:58.294 }, 00:04:58.294 { 00:04:58.294 "subsystem": "ublk", 00:04:58.294 "config": [] 00:04:58.294 }, 00:04:58.294 { 00:04:58.294 "subsystem": "nbd", 00:04:58.294 "config": [] 00:04:58.294 }, 00:04:58.294 { 00:04:58.294 "subsystem": "nvmf", 00:04:58.294 "config": [ 00:04:58.294 { 00:04:58.294 "method": "nvmf_set_config", 00:04:58.294 "params": { 00:04:58.294 "discovery_filter": "match_any", 00:04:58.294 "admin_cmd_passthru": { 00:04:58.294 "identify_ctrlr": false 00:04:58.294 }, 00:04:58.294 "dhchap_digests": [ 00:04:58.294 "sha256", 00:04:58.294 "sha384", 00:04:58.294 "sha512" 00:04:58.294 ], 00:04:58.294 "dhchap_dhgroups": [ 00:04:58.294 "null", 00:04:58.294 "ffdhe2048", 00:04:58.294 "ffdhe3072", 00:04:58.294 "ffdhe4096", 00:04:58.294 "ffdhe6144", 00:04:58.294 "ffdhe8192" 00:04:58.294 ] 00:04:58.294 } 00:04:58.294 }, 00:04:58.294 { 00:04:58.294 "method": "nvmf_set_max_subsystems", 00:04:58.294 "params": { 00:04:58.294 "max_subsystems": 1024 00:04:58.294 } 00:04:58.294 }, 00:04:58.294 { 00:04:58.294 "method": "nvmf_set_crdt", 00:04:58.294 "params": { 00:04:58.294 "crdt1": 0, 00:04:58.294 "crdt2": 0, 00:04:58.294 "crdt3": 0 00:04:58.294 } 00:04:58.294 }, 00:04:58.294 { 00:04:58.294 "method": "nvmf_create_transport", 00:04:58.294 "params": { 00:04:58.294 "trtype": "TCP", 00:04:58.294 "max_queue_depth": 128, 00:04:58.294 "max_io_qpairs_per_ctrlr": 127, 00:04:58.294 "in_capsule_data_size": 4096, 00:04:58.294 "max_io_size": 131072, 00:04:58.294 "io_unit_size": 131072, 00:04:58.294 "max_aq_depth": 128, 00:04:58.294 "num_shared_buffers": 511, 00:04:58.294 "buf_cache_size": 4294967295, 
00:04:58.294 "dif_insert_or_strip": false, 00:04:58.294 "zcopy": false, 00:04:58.294 "c2h_success": true, 00:04:58.294 "sock_priority": 0, 00:04:58.294 "abort_timeout_sec": 1, 00:04:58.294 "ack_timeout": 0, 00:04:58.294 "data_wr_pool_size": 0 00:04:58.294 } 00:04:58.294 } 00:04:58.294 ] 00:04:58.294 }, 00:04:58.294 { 00:04:58.294 "subsystem": "iscsi", 00:04:58.294 "config": [ 00:04:58.294 { 00:04:58.294 "method": "iscsi_set_options", 00:04:58.294 "params": { 00:04:58.294 "node_base": "iqn.2016-06.io.spdk", 00:04:58.294 "max_sessions": 128, 00:04:58.294 "max_connections_per_session": 2, 00:04:58.294 "max_queue_depth": 64, 00:04:58.294 "default_time2wait": 2, 00:04:58.294 "default_time2retain": 20, 00:04:58.294 "first_burst_length": 8192, 00:04:58.294 "immediate_data": true, 00:04:58.294 "allow_duplicated_isid": false, 00:04:58.294 "error_recovery_level": 0, 00:04:58.294 "nop_timeout": 60, 00:04:58.294 "nop_in_interval": 30, 00:04:58.294 "disable_chap": false, 00:04:58.294 "require_chap": false, 00:04:58.294 "mutual_chap": false, 00:04:58.294 "chap_group": 0, 00:04:58.294 "max_large_datain_per_connection": 64, 00:04:58.294 "max_r2t_per_connection": 4, 00:04:58.294 "pdu_pool_size": 36864, 00:04:58.294 "immediate_data_pool_size": 16384, 00:04:58.294 "data_out_pool_size": 2048 00:04:58.294 } 00:04:58.294 } 00:04:58.294 ] 00:04:58.294 } 00:04:58.294 ] 00:04:58.294 } 00:04:58.294 09:49:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:58.294 09:49:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 56771 00:04:58.294 09:49:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 56771 ']' 00:04:58.294 09:49:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 56771 00:04:58.294 09:49:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:58.294 09:49:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:04:58.294 09:49:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 56771 00:04:58.294 09:49:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:58.294 09:49:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:58.294 killing process with pid 56771 00:04:58.294 09:49:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 56771' 00:04:58.294 09:49:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 56771 00:04:58.294 09:49:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 56771 00:05:00.830 09:49:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=56830 00:05:00.830 09:49:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:00.830 09:49:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:06.111 09:49:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 56830 00:05:06.111 09:49:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 56830 ']' 00:05:06.111 09:49:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 56830 00:05:06.111 09:49:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:06.111 09:49:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:06.111 09:49:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 56830 00:05:06.111 killing process with pid 56830 00:05:06.111 09:49:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:06.111 09:49:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 
00:05:06.111 09:49:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 56830' 00:05:06.111 09:49:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 56830 00:05:06.111 09:49:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 56830 00:05:08.650 09:49:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:08.650 09:49:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:08.650 00:05:08.650 real 0m11.797s 00:05:08.650 user 0m10.915s 00:05:08.650 sys 0m1.151s 00:05:08.650 09:49:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:08.650 09:49:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:08.650 ************************************ 00:05:08.650 END TEST skip_rpc_with_json 00:05:08.650 ************************************ 00:05:08.650 09:49:44 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:08.650 09:49:44 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:08.650 09:49:44 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:08.650 09:49:44 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:08.650 ************************************ 00:05:08.650 START TEST skip_rpc_with_delay 00:05:08.650 ************************************ 00:05:08.650 09:49:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:05:08.650 09:49:44 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:08.650 09:49:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:05:08.650 09:49:44 skip_rpc.skip_rpc_with_delay -- 
common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:08.650 09:49:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:08.650 09:49:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:08.650 09:49:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:08.650 09:49:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:08.650 09:49:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:08.650 09:49:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:08.650 09:49:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:08.650 09:49:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:08.650 09:49:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:08.650 [2024-10-21 09:49:45.064780] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:08.650 09:49:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:05:08.650 09:49:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:08.650 09:49:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:08.650 09:49:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:08.650 00:05:08.650 real 0m0.166s 00:05:08.650 user 0m0.089s 00:05:08.650 sys 0m0.076s 00:05:08.650 09:49:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:08.650 09:49:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:08.650 ************************************ 00:05:08.650 END TEST skip_rpc_with_delay 00:05:08.650 ************************************ 00:05:08.650 09:49:45 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:08.650 09:49:45 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:08.650 09:49:45 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:08.650 09:49:45 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:08.650 09:49:45 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:08.650 09:49:45 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:08.650 ************************************ 00:05:08.650 START TEST exit_on_failed_rpc_init 00:05:08.650 ************************************ 00:05:08.650 09:49:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:05:08.650 09:49:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=56966 00:05:08.650 09:49:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:08.650 09:49:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 56966 00:05:08.650 09:49:45 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 56966 ']' 00:05:08.650 09:49:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:08.650 09:49:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:08.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:08.650 09:49:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:08.651 09:49:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:08.651 09:49:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:08.910 [2024-10-21 09:49:45.291022] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:05:08.910 [2024-10-21 09:49:45.291450] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56966 ] 00:05:08.910 [2024-10-21 09:49:45.453539] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.169 [2024-10-21 09:49:45.592148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.109 09:49:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:10.109 09:49:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:05:10.109 09:49:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:10.109 09:49:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:10.109 09:49:46 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:05:10.109 09:49:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:10.109 09:49:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:10.109 09:49:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:10.109 09:49:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:10.109 09:49:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:10.109 09:49:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:10.109 09:49:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:10.109 09:49:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:10.109 09:49:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:10.109 09:49:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:10.379 [2024-10-21 09:49:46.741481] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
00:05:10.379 [2024-10-21 09:49:46.741640] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56990 ] 00:05:10.379 [2024-10-21 09:49:46.903962] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.644 [2024-10-21 09:49:47.022287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:10.644 [2024-10-21 09:49:47.022378] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:05:10.644 [2024-10-21 09:49:47.022391] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:10.644 [2024-10-21 09:49:47.022402] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:10.904 09:49:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:05:10.904 09:49:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:10.904 09:49:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:05:10.904 09:49:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:05:10.904 09:49:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:05:10.904 09:49:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:10.904 09:49:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:10.904 09:49:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 56966 00:05:10.904 09:49:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 56966 ']' 00:05:10.904 09:49:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 56966 00:05:10.904 09:49:47 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:05:10.904 09:49:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:10.904 09:49:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 56966 00:05:10.904 09:49:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:10.904 09:49:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:10.904 killing process with pid 56966 00:05:10.904 09:49:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 56966' 00:05:10.904 09:49:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 56966 00:05:10.904 09:49:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 56966 00:05:13.445 ************************************ 00:05:13.445 END TEST exit_on_failed_rpc_init 00:05:13.445 ************************************ 00:05:13.445 00:05:13.445 real 0m4.700s 00:05:13.445 user 0m4.858s 00:05:13.445 sys 0m0.700s 00:05:13.445 09:49:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:13.445 09:49:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:13.445 09:49:49 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:13.445 00:05:13.445 real 0m24.733s 00:05:13.445 user 0m23.033s 00:05:13.445 sys 0m2.761s 00:05:13.445 09:49:49 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:13.445 09:49:49 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.445 ************************************ 00:05:13.445 END TEST skip_rpc 00:05:13.445 ************************************ 00:05:13.445 09:49:49 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:13.445 09:49:49 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:13.445 09:49:49 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:13.445 09:49:49 -- common/autotest_common.sh@10 -- # set +x 00:05:13.445 ************************************ 00:05:13.445 START TEST rpc_client 00:05:13.445 ************************************ 00:05:13.445 09:49:50 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:13.704 * Looking for test storage... 00:05:13.704 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:13.704 09:49:50 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:13.704 09:49:50 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:05:13.704 09:49:50 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:13.704 09:49:50 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:13.704 09:49:50 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:13.704 09:49:50 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:13.704 09:49:50 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:13.704 09:49:50 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:13.704 09:49:50 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:13.704 09:49:50 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:13.704 09:49:50 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:13.704 09:49:50 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:13.704 09:49:50 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:13.704 09:49:50 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:13.704 09:49:50 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:13.704 09:49:50 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:13.704 09:49:50 rpc_client -- scripts/common.sh@345 
-- # : 1 00:05:13.704 09:49:50 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:13.704 09:49:50 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:13.704 09:49:50 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:13.704 09:49:50 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:13.704 09:49:50 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:13.704 09:49:50 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:13.704 09:49:50 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:13.704 09:49:50 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:13.704 09:49:50 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:13.704 09:49:50 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:13.704 09:49:50 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:13.704 09:49:50 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:13.704 09:49:50 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:13.704 09:49:50 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:13.704 09:49:50 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:13.704 09:49:50 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:13.704 09:49:50 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:13.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.704 --rc genhtml_branch_coverage=1 00:05:13.704 --rc genhtml_function_coverage=1 00:05:13.704 --rc genhtml_legend=1 00:05:13.704 --rc geninfo_all_blocks=1 00:05:13.704 --rc geninfo_unexecuted_blocks=1 00:05:13.704 00:05:13.704 ' 00:05:13.704 09:49:50 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:13.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.704 --rc genhtml_branch_coverage=1 00:05:13.704 --rc genhtml_function_coverage=1 00:05:13.704 --rc 
genhtml_legend=1 00:05:13.704 --rc geninfo_all_blocks=1 00:05:13.704 --rc geninfo_unexecuted_blocks=1 00:05:13.704 00:05:13.704 ' 00:05:13.704 09:49:50 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:13.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.704 --rc genhtml_branch_coverage=1 00:05:13.704 --rc genhtml_function_coverage=1 00:05:13.704 --rc genhtml_legend=1 00:05:13.704 --rc geninfo_all_blocks=1 00:05:13.704 --rc geninfo_unexecuted_blocks=1 00:05:13.704 00:05:13.704 ' 00:05:13.704 09:49:50 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:13.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.705 --rc genhtml_branch_coverage=1 00:05:13.705 --rc genhtml_function_coverage=1 00:05:13.705 --rc genhtml_legend=1 00:05:13.705 --rc geninfo_all_blocks=1 00:05:13.705 --rc geninfo_unexecuted_blocks=1 00:05:13.705 00:05:13.705 ' 00:05:13.705 09:49:50 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:13.705 OK 00:05:13.964 09:49:50 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:13.964 00:05:13.964 real 0m0.295s 00:05:13.964 user 0m0.167s 00:05:13.964 sys 0m0.146s 00:05:13.964 09:49:50 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:13.964 09:49:50 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:13.964 ************************************ 00:05:13.964 END TEST rpc_client 00:05:13.964 ************************************ 00:05:13.964 09:49:50 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:13.964 09:49:50 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:13.964 09:49:50 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:13.964 09:49:50 -- common/autotest_common.sh@10 -- # set +x 00:05:13.964 ************************************ 00:05:13.964 START TEST json_config 
00:05:13.964 ************************************ 00:05:13.964 09:49:50 json_config -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:13.964 09:49:50 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:13.964 09:49:50 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:05:13.964 09:49:50 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:13.964 09:49:50 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:13.964 09:49:50 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:13.964 09:49:50 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:13.964 09:49:50 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:13.964 09:49:50 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:13.964 09:49:50 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:13.964 09:49:50 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:13.964 09:49:50 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:13.964 09:49:50 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:13.964 09:49:50 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:13.964 09:49:50 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:13.964 09:49:50 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:13.964 09:49:50 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:13.964 09:49:50 json_config -- scripts/common.sh@345 -- # : 1 00:05:13.964 09:49:50 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:13.964 09:49:50 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:13.964 09:49:50 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:13.964 09:49:50 json_config -- scripts/common.sh@353 -- # local d=1 00:05:13.964 09:49:50 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:13.964 09:49:50 json_config -- scripts/common.sh@355 -- # echo 1 00:05:13.964 09:49:50 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:13.964 09:49:50 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:13.964 09:49:50 json_config -- scripts/common.sh@353 -- # local d=2 00:05:13.964 09:49:50 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:13.964 09:49:50 json_config -- scripts/common.sh@355 -- # echo 2 00:05:13.964 09:49:50 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:13.964 09:49:50 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:13.964 09:49:50 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:13.964 09:49:50 json_config -- scripts/common.sh@368 -- # return 0 00:05:13.964 09:49:50 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:13.964 09:49:50 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:13.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.964 --rc genhtml_branch_coverage=1 00:05:13.964 --rc genhtml_function_coverage=1 00:05:13.964 --rc genhtml_legend=1 00:05:13.964 --rc geninfo_all_blocks=1 00:05:13.964 --rc geninfo_unexecuted_blocks=1 00:05:13.964 00:05:13.964 ' 00:05:13.964 09:49:50 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:13.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.964 --rc genhtml_branch_coverage=1 00:05:13.964 --rc genhtml_function_coverage=1 00:05:13.964 --rc genhtml_legend=1 00:05:13.964 --rc geninfo_all_blocks=1 00:05:13.964 --rc geninfo_unexecuted_blocks=1 00:05:13.964 00:05:13.964 ' 00:05:13.964 09:49:50 json_config -- 
common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:13.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.964 --rc genhtml_branch_coverage=1 00:05:13.964 --rc genhtml_function_coverage=1 00:05:13.964 --rc genhtml_legend=1 00:05:13.964 --rc geninfo_all_blocks=1 00:05:13.964 --rc geninfo_unexecuted_blocks=1 00:05:13.964 00:05:13.964 ' 00:05:13.964 09:49:50 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:13.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.964 --rc genhtml_branch_coverage=1 00:05:13.964 --rc genhtml_function_coverage=1 00:05:13.964 --rc genhtml_legend=1 00:05:13.964 --rc geninfo_all_blocks=1 00:05:13.964 --rc geninfo_unexecuted_blocks=1 00:05:13.964 00:05:13.964 ' 00:05:13.964 09:49:50 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:13.964 09:49:50 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:13.964 09:49:50 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:13.964 09:49:50 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:13.964 09:49:50 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:13.964 09:49:50 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:13.964 09:49:50 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:13.964 09:49:50 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:13.964 09:49:50 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:13.964 09:49:50 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:13.964 09:49:50 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:13.964 09:49:50 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:14.224 09:49:50 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bc3e89c5-a0c9-4b43-b383-a6b5a161abf4 00:05:14.224 09:49:50 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=bc3e89c5-a0c9-4b43-b383-a6b5a161abf4 00:05:14.224 09:49:50 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:14.224 09:49:50 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:14.224 09:49:50 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:14.224 09:49:50 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:14.224 09:49:50 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:14.224 09:49:50 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:14.224 09:49:50 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:14.224 09:49:50 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:14.224 09:49:50 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:14.224 09:49:50 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:14.225 09:49:50 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:14.225 09:49:50 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:14.225 09:49:50 json_config -- paths/export.sh@5 -- # export PATH 00:05:14.225 09:49:50 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:14.225 09:49:50 json_config -- nvmf/common.sh@51 -- # : 0 00:05:14.225 09:49:50 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:14.225 09:49:50 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:14.225 09:49:50 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:14.225 09:49:50 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:14.225 09:49:50 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:14.225 09:49:50 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:14.225 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:14.225 09:49:50 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:14.225 09:49:50 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:14.225 09:49:50 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:14.225 09:49:50 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
00:05:14.225 09:49:50 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:14.225 09:49:50 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:14.225 09:49:50 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:14.225 09:49:50 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:14.225 WARNING: No tests are enabled so not running JSON configuration tests 00:05:14.225 09:49:50 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:14.225 09:49:50 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:14.225 00:05:14.225 real 0m0.217s 00:05:14.225 user 0m0.136s 00:05:14.225 sys 0m0.091s 00:05:14.225 09:49:50 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:14.225 09:49:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:14.225 ************************************ 00:05:14.225 END TEST json_config 00:05:14.225 ************************************ 00:05:14.225 09:49:50 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:14.225 09:49:50 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:14.225 09:49:50 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:14.225 09:49:50 -- common/autotest_common.sh@10 -- # set +x 00:05:14.225 ************************************ 00:05:14.225 START TEST json_config_extra_key 00:05:14.225 ************************************ 00:05:14.225 09:49:50 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:14.225 09:49:50 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:14.225 09:49:50 json_config_extra_key -- 
common/autotest_common.sh@1691 -- # lcov --version 00:05:14.225 09:49:50 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:14.225 09:49:50 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:14.225 09:49:50 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:14.225 09:49:50 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:14.225 09:49:50 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:14.225 09:49:50 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:14.225 09:49:50 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:14.225 09:49:50 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:14.225 09:49:50 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:14.225 09:49:50 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:14.225 09:49:50 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:14.225 09:49:50 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:14.225 09:49:50 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:14.225 09:49:50 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:14.225 09:49:50 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:14.225 09:49:50 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:14.225 09:49:50 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:14.225 09:49:50 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:14.225 09:49:50 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:14.225 09:49:50 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:14.225 09:49:50 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:14.484 09:49:50 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:14.484 09:49:50 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:14.484 09:49:50 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:14.484 09:49:50 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:14.484 09:49:50 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:14.484 09:49:50 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:14.484 09:49:50 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:14.484 09:49:50 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:14.484 09:49:50 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:14.484 09:49:50 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:14.484 09:49:50 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:14.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.484 --rc genhtml_branch_coverage=1 00:05:14.484 --rc genhtml_function_coverage=1 00:05:14.484 --rc genhtml_legend=1 00:05:14.484 --rc geninfo_all_blocks=1 00:05:14.484 --rc geninfo_unexecuted_blocks=1 00:05:14.484 00:05:14.484 ' 00:05:14.484 09:49:50 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:14.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.484 --rc genhtml_branch_coverage=1 00:05:14.484 --rc genhtml_function_coverage=1 00:05:14.484 --rc 
genhtml_legend=1 00:05:14.484 --rc geninfo_all_blocks=1 00:05:14.484 --rc geninfo_unexecuted_blocks=1 00:05:14.484 00:05:14.484 ' 00:05:14.484 09:49:50 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:14.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.484 --rc genhtml_branch_coverage=1 00:05:14.484 --rc genhtml_function_coverage=1 00:05:14.484 --rc genhtml_legend=1 00:05:14.484 --rc geninfo_all_blocks=1 00:05:14.484 --rc geninfo_unexecuted_blocks=1 00:05:14.484 00:05:14.484 ' 00:05:14.484 09:49:50 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:14.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.484 --rc genhtml_branch_coverage=1 00:05:14.484 --rc genhtml_function_coverage=1 00:05:14.484 --rc genhtml_legend=1 00:05:14.484 --rc geninfo_all_blocks=1 00:05:14.484 --rc geninfo_unexecuted_blocks=1 00:05:14.484 00:05:14.484 ' 00:05:14.484 09:49:50 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:14.484 09:49:50 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:14.484 09:49:50 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:14.484 09:49:50 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:14.484 09:49:50 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:14.484 09:49:50 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:14.484 09:49:50 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:14.484 09:49:50 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:14.484 09:49:50 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:14.484 09:49:50 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:14.484 09:49:50 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:14.484 09:49:50 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:14.484 09:49:50 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bc3e89c5-a0c9-4b43-b383-a6b5a161abf4 00:05:14.484 09:49:50 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=bc3e89c5-a0c9-4b43-b383-a6b5a161abf4 00:05:14.484 09:49:50 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:14.484 09:49:50 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:14.484 09:49:50 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:14.484 09:49:50 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:14.484 09:49:50 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:14.484 09:49:50 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:14.484 09:49:50 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:14.484 09:49:50 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:14.484 09:49:50 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:14.485 09:49:50 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:14.485 09:49:50 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:14.485 09:49:50 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:14.485 09:49:50 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:14.485 09:49:50 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:14.485 09:49:50 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:14.485 09:49:50 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:14.485 09:49:50 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:14.485 09:49:50 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:14.485 09:49:50 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:14.485 09:49:50 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:05:14.485 09:49:50 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:14.485 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:14.485 09:49:50 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:14.485 09:49:50 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:14.485 09:49:50 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:14.485 09:49:50 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:14.485 09:49:50 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:14.485 09:49:50 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:14.485 09:49:50 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:14.485 09:49:50 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:14.485 09:49:50 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:14.485 09:49:50 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:14.485 09:49:50 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:14.485 09:49:50 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:14.485 09:49:50 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:14.485 INFO: launching applications... 00:05:14.485 09:49:50 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
00:05:14.485 09:49:50 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:14.485 09:49:50 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:14.485 09:49:50 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:14.485 09:49:50 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:14.485 09:49:50 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:14.485 09:49:50 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:14.485 09:49:50 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:14.485 09:49:50 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:14.485 09:49:50 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57200 00:05:14.485 Waiting for target to run... 00:05:14.485 09:49:50 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:14.485 09:49:50 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57200 /var/tmp/spdk_tgt.sock 00:05:14.485 09:49:50 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 57200 ']' 00:05:14.485 09:49:50 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:14.485 09:49:50 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:14.485 09:49:50 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:14.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:05:14.485 09:49:50 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:14.485 09:49:50 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:14.485 09:49:50 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:14.485 [2024-10-21 09:49:50.979024] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:05:14.485 [2024-10-21 09:49:50.979150] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57200 ] 00:05:15.053 [2024-10-21 09:49:51.515851] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.053 [2024-10-21 09:49:51.639341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.991 09:49:52 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:15.991 09:49:52 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:05:15.991 00:05:15.991 09:49:52 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:15.991 INFO: shutting down applications... 00:05:15.991 09:49:52 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:05:15.991 09:49:52 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:15.991 09:49:52 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:15.991 09:49:52 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:15.991 09:49:52 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57200 ]] 00:05:15.991 09:49:52 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57200 00:05:15.991 09:49:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:15.991 09:49:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:15.991 09:49:52 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57200 00:05:15.991 09:49:52 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:16.558 09:49:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:16.558 09:49:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:16.558 09:49:52 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57200 00:05:16.558 09:49:52 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:17.124 09:49:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:17.124 09:49:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:17.124 09:49:53 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57200 00:05:17.124 09:49:53 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:17.382 09:49:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:17.382 09:49:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:17.382 09:49:53 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57200 00:05:17.382 09:49:53 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:17.947 09:49:54 json_config_extra_key -- json_config/common.sh@40 -- # 
(( i++ )) 00:05:17.948 09:49:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:17.948 09:49:54 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57200 00:05:17.948 09:49:54 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:18.513 09:49:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:18.514 09:49:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:18.514 09:49:54 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57200 00:05:18.514 09:49:54 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:19.080 09:49:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:19.080 09:49:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:19.080 09:49:55 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57200 00:05:19.080 09:49:55 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:19.080 09:49:55 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:19.080 09:49:55 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:19.080 SPDK target shutdown done 00:05:19.080 09:49:55 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:19.080 Success 00:05:19.080 09:49:55 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:19.080 00:05:19.080 real 0m4.819s 00:05:19.080 user 0m4.290s 00:05:19.080 sys 0m0.771s 00:05:19.080 09:49:55 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:19.080 09:49:55 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:19.080 ************************************ 00:05:19.080 END TEST json_config_extra_key 00:05:19.080 ************************************ 00:05:19.080 09:49:55 -- spdk/autotest.sh@161 -- # run_test alias_rpc 
/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:19.080 09:49:55 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:19.080 09:49:55 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:19.080 09:49:55 -- common/autotest_common.sh@10 -- # set +x 00:05:19.080 ************************************ 00:05:19.080 START TEST alias_rpc 00:05:19.080 ************************************ 00:05:19.080 09:49:55 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:19.080 * Looking for test storage... 00:05:19.080 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:19.080 09:49:55 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:19.081 09:49:55 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:05:19.081 09:49:55 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:19.339 09:49:55 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:19.339 09:49:55 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:19.339 09:49:55 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:19.339 09:49:55 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:19.339 09:49:55 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:19.339 09:49:55 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:19.339 09:49:55 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:19.339 09:49:55 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:19.339 09:49:55 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:19.339 09:49:55 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:19.339 09:49:55 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:19.339 09:49:55 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:19.339 09:49:55 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:19.339 09:49:55 alias_rpc -- 
scripts/common.sh@345 -- # : 1 00:05:19.339 09:49:55 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:19.339 09:49:55 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:19.339 09:49:55 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:19.339 09:49:55 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:19.339 09:49:55 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:19.339 09:49:55 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:19.339 09:49:55 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:19.339 09:49:55 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:19.339 09:49:55 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:19.339 09:49:55 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:19.339 09:49:55 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:19.339 09:49:55 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:19.339 09:49:55 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:19.339 09:49:55 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:19.339 09:49:55 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:19.339 09:49:55 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:19.339 09:49:55 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:19.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.339 --rc genhtml_branch_coverage=1 00:05:19.339 --rc genhtml_function_coverage=1 00:05:19.339 --rc genhtml_legend=1 00:05:19.339 --rc geninfo_all_blocks=1 00:05:19.339 --rc geninfo_unexecuted_blocks=1 00:05:19.339 00:05:19.339 ' 00:05:19.339 09:49:55 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:19.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.339 --rc genhtml_branch_coverage=1 00:05:19.339 --rc genhtml_function_coverage=1 00:05:19.339 --rc 
genhtml_legend=1 00:05:19.339 --rc geninfo_all_blocks=1 00:05:19.339 --rc geninfo_unexecuted_blocks=1 00:05:19.339 00:05:19.339 ' 00:05:19.339 09:49:55 alias_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:19.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.339 --rc genhtml_branch_coverage=1 00:05:19.339 --rc genhtml_function_coverage=1 00:05:19.339 --rc genhtml_legend=1 00:05:19.339 --rc geninfo_all_blocks=1 00:05:19.339 --rc geninfo_unexecuted_blocks=1 00:05:19.339 00:05:19.339 ' 00:05:19.339 09:49:55 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:19.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.339 --rc genhtml_branch_coverage=1 00:05:19.339 --rc genhtml_function_coverage=1 00:05:19.339 --rc genhtml_legend=1 00:05:19.339 --rc geninfo_all_blocks=1 00:05:19.339 --rc geninfo_unexecuted_blocks=1 00:05:19.339 00:05:19.339 ' 00:05:19.339 09:49:55 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:19.339 09:49:55 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57311 00:05:19.339 09:49:55 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:19.339 09:49:55 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57311 00:05:19.339 09:49:55 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 57311 ']' 00:05:19.339 09:49:55 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:19.339 09:49:55 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:19.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:19.339 09:49:55 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:19.339 09:49:55 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:19.340 09:49:55 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.340 [2024-10-21 09:49:55.862258] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:05:19.340 [2024-10-21 09:49:55.862752] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57311 ] 00:05:19.598 [2024-10-21 09:49:56.026731] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.598 [2024-10-21 09:49:56.171780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.973 09:49:57 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:20.973 09:49:57 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:20.973 09:49:57 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:20.973 09:49:57 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57311 00:05:20.973 09:49:57 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 57311 ']' 00:05:20.973 09:49:57 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 57311 00:05:20.973 09:49:57 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:05:20.973 09:49:57 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:20.973 09:49:57 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57311 00:05:20.973 09:49:57 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:20.973 09:49:57 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:20.973 killing process with pid 57311 00:05:20.973 09:49:57 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57311' 00:05:20.973 09:49:57 alias_rpc -- 
common/autotest_common.sh@969 -- # kill 57311 00:05:20.973 09:49:57 alias_rpc -- common/autotest_common.sh@974 -- # wait 57311 00:05:23.512 00:05:23.512 real 0m4.553s 00:05:23.512 user 0m4.416s 00:05:23.512 sys 0m0.698s 00:05:23.512 09:50:00 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:23.512 09:50:00 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.512 ************************************ 00:05:23.512 END TEST alias_rpc 00:05:23.512 ************************************ 00:05:23.772 09:50:00 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:23.772 09:50:00 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:23.772 09:50:00 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:23.772 09:50:00 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:23.772 09:50:00 -- common/autotest_common.sh@10 -- # set +x 00:05:23.772 ************************************ 00:05:23.772 START TEST spdkcli_tcp 00:05:23.772 ************************************ 00:05:23.772 09:50:00 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:23.772 * Looking for test storage... 
00:05:23.772 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:23.772 09:50:00 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:23.772 09:50:00 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:05:23.772 09:50:00 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:23.772 09:50:00 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:23.772 09:50:00 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:23.772 09:50:00 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:23.772 09:50:00 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:23.772 09:50:00 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:23.772 09:50:00 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:23.772 09:50:00 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:23.772 09:50:00 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:23.772 09:50:00 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:23.772 09:50:00 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:23.772 09:50:00 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:23.772 09:50:00 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:23.772 09:50:00 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:23.772 09:50:00 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:23.772 09:50:00 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:23.772 09:50:00 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:23.772 09:50:00 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:24.032 09:50:00 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:24.032 09:50:00 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:24.032 09:50:00 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:24.032 09:50:00 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:24.032 09:50:00 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:24.032 09:50:00 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:24.032 09:50:00 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:24.032 09:50:00 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:24.032 09:50:00 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:24.032 09:50:00 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:24.032 09:50:00 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:24.032 09:50:00 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:24.032 09:50:00 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:24.032 09:50:00 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:24.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.032 --rc genhtml_branch_coverage=1 00:05:24.032 --rc genhtml_function_coverage=1 00:05:24.032 --rc genhtml_legend=1 00:05:24.032 --rc geninfo_all_blocks=1 00:05:24.032 --rc geninfo_unexecuted_blocks=1 00:05:24.032 00:05:24.032 ' 00:05:24.032 09:50:00 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:24.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.032 --rc genhtml_branch_coverage=1 00:05:24.032 --rc genhtml_function_coverage=1 00:05:24.032 --rc genhtml_legend=1 00:05:24.032 --rc geninfo_all_blocks=1 00:05:24.032 --rc geninfo_unexecuted_blocks=1 00:05:24.032 00:05:24.032 ' 00:05:24.032 09:50:00 spdkcli_tcp -- 
common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:24.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.032 --rc genhtml_branch_coverage=1 00:05:24.032 --rc genhtml_function_coverage=1 00:05:24.032 --rc genhtml_legend=1 00:05:24.033 --rc geninfo_all_blocks=1 00:05:24.033 --rc geninfo_unexecuted_blocks=1 00:05:24.033 00:05:24.033 ' 00:05:24.033 09:50:00 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:24.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.033 --rc genhtml_branch_coverage=1 00:05:24.033 --rc genhtml_function_coverage=1 00:05:24.033 --rc genhtml_legend=1 00:05:24.033 --rc geninfo_all_blocks=1 00:05:24.033 --rc geninfo_unexecuted_blocks=1 00:05:24.033 00:05:24.033 ' 00:05:24.033 09:50:00 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:24.033 09:50:00 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:24.033 09:50:00 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:24.033 09:50:00 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:24.033 09:50:00 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:24.033 09:50:00 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:24.033 09:50:00 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:24.033 09:50:00 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:24.033 09:50:00 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:24.033 09:50:00 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57424 00:05:24.033 09:50:00 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:24.033 09:50:00 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57424 00:05:24.033 09:50:00 spdkcli_tcp -- 
common/autotest_common.sh@831 -- # '[' -z 57424 ']' 00:05:24.033 09:50:00 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.033 09:50:00 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:24.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.033 09:50:00 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.033 09:50:00 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:24.033 09:50:00 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:24.033 [2024-10-21 09:50:00.491593] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:05:24.033 [2024-10-21 09:50:00.491715] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57424 ] 00:05:24.294 [2024-10-21 09:50:00.656358] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:24.294 [2024-10-21 09:50:00.807685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.294 [2024-10-21 09:50:00.807733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:25.677 09:50:01 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:25.677 09:50:01 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:05:25.677 09:50:01 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57446 00:05:25.677 09:50:01 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:25.677 09:50:01 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:25.677 [ 00:05:25.677 "bdev_malloc_delete", 
00:05:25.677 "bdev_malloc_create", 00:05:25.677 "bdev_null_resize", 00:05:25.677 "bdev_null_delete", 00:05:25.677 "bdev_null_create", 00:05:25.677 "bdev_nvme_cuse_unregister", 00:05:25.677 "bdev_nvme_cuse_register", 00:05:25.677 "bdev_opal_new_user", 00:05:25.677 "bdev_opal_set_lock_state", 00:05:25.677 "bdev_opal_delete", 00:05:25.677 "bdev_opal_get_info", 00:05:25.677 "bdev_opal_create", 00:05:25.677 "bdev_nvme_opal_revert", 00:05:25.677 "bdev_nvme_opal_init", 00:05:25.677 "bdev_nvme_send_cmd", 00:05:25.677 "bdev_nvme_set_keys", 00:05:25.677 "bdev_nvme_get_path_iostat", 00:05:25.677 "bdev_nvme_get_mdns_discovery_info", 00:05:25.677 "bdev_nvme_stop_mdns_discovery", 00:05:25.677 "bdev_nvme_start_mdns_discovery", 00:05:25.677 "bdev_nvme_set_multipath_policy", 00:05:25.677 "bdev_nvme_set_preferred_path", 00:05:25.677 "bdev_nvme_get_io_paths", 00:05:25.677 "bdev_nvme_remove_error_injection", 00:05:25.677 "bdev_nvme_add_error_injection", 00:05:25.677 "bdev_nvme_get_discovery_info", 00:05:25.677 "bdev_nvme_stop_discovery", 00:05:25.677 "bdev_nvme_start_discovery", 00:05:25.677 "bdev_nvme_get_controller_health_info", 00:05:25.677 "bdev_nvme_disable_controller", 00:05:25.677 "bdev_nvme_enable_controller", 00:05:25.677 "bdev_nvme_reset_controller", 00:05:25.677 "bdev_nvme_get_transport_statistics", 00:05:25.677 "bdev_nvme_apply_firmware", 00:05:25.677 "bdev_nvme_detach_controller", 00:05:25.677 "bdev_nvme_get_controllers", 00:05:25.677 "bdev_nvme_attach_controller", 00:05:25.677 "bdev_nvme_set_hotplug", 00:05:25.677 "bdev_nvme_set_options", 00:05:25.677 "bdev_passthru_delete", 00:05:25.677 "bdev_passthru_create", 00:05:25.677 "bdev_lvol_set_parent_bdev", 00:05:25.677 "bdev_lvol_set_parent", 00:05:25.677 "bdev_lvol_check_shallow_copy", 00:05:25.677 "bdev_lvol_start_shallow_copy", 00:05:25.677 "bdev_lvol_grow_lvstore", 00:05:25.677 "bdev_lvol_get_lvols", 00:05:25.677 "bdev_lvol_get_lvstores", 00:05:25.677 "bdev_lvol_delete", 00:05:25.677 "bdev_lvol_set_read_only", 
00:05:25.677 "bdev_lvol_resize", 00:05:25.677 "bdev_lvol_decouple_parent", 00:05:25.677 "bdev_lvol_inflate", 00:05:25.677 "bdev_lvol_rename", 00:05:25.677 "bdev_lvol_clone_bdev", 00:05:25.677 "bdev_lvol_clone", 00:05:25.677 "bdev_lvol_snapshot", 00:05:25.677 "bdev_lvol_create", 00:05:25.677 "bdev_lvol_delete_lvstore", 00:05:25.677 "bdev_lvol_rename_lvstore", 00:05:25.677 "bdev_lvol_create_lvstore", 00:05:25.677 "bdev_raid_set_options", 00:05:25.677 "bdev_raid_remove_base_bdev", 00:05:25.677 "bdev_raid_add_base_bdev", 00:05:25.677 "bdev_raid_delete", 00:05:25.677 "bdev_raid_create", 00:05:25.677 "bdev_raid_get_bdevs", 00:05:25.677 "bdev_error_inject_error", 00:05:25.677 "bdev_error_delete", 00:05:25.677 "bdev_error_create", 00:05:25.677 "bdev_split_delete", 00:05:25.677 "bdev_split_create", 00:05:25.677 "bdev_delay_delete", 00:05:25.677 "bdev_delay_create", 00:05:25.677 "bdev_delay_update_latency", 00:05:25.677 "bdev_zone_block_delete", 00:05:25.677 "bdev_zone_block_create", 00:05:25.677 "blobfs_create", 00:05:25.677 "blobfs_detect", 00:05:25.677 "blobfs_set_cache_size", 00:05:25.677 "bdev_aio_delete", 00:05:25.677 "bdev_aio_rescan", 00:05:25.677 "bdev_aio_create", 00:05:25.677 "bdev_ftl_set_property", 00:05:25.677 "bdev_ftl_get_properties", 00:05:25.677 "bdev_ftl_get_stats", 00:05:25.677 "bdev_ftl_unmap", 00:05:25.677 "bdev_ftl_unload", 00:05:25.677 "bdev_ftl_delete", 00:05:25.677 "bdev_ftl_load", 00:05:25.677 "bdev_ftl_create", 00:05:25.677 "bdev_virtio_attach_controller", 00:05:25.677 "bdev_virtio_scsi_get_devices", 00:05:25.677 "bdev_virtio_detach_controller", 00:05:25.677 "bdev_virtio_blk_set_hotplug", 00:05:25.677 "bdev_iscsi_delete", 00:05:25.677 "bdev_iscsi_create", 00:05:25.677 "bdev_iscsi_set_options", 00:05:25.677 "accel_error_inject_error", 00:05:25.677 "ioat_scan_accel_module", 00:05:25.677 "dsa_scan_accel_module", 00:05:25.677 "iaa_scan_accel_module", 00:05:25.677 "keyring_file_remove_key", 00:05:25.677 "keyring_file_add_key", 00:05:25.677 
"keyring_linux_set_options", 00:05:25.677 "fsdev_aio_delete", 00:05:25.677 "fsdev_aio_create", 00:05:25.677 "iscsi_get_histogram", 00:05:25.677 "iscsi_enable_histogram", 00:05:25.677 "iscsi_set_options", 00:05:25.677 "iscsi_get_auth_groups", 00:05:25.677 "iscsi_auth_group_remove_secret", 00:05:25.677 "iscsi_auth_group_add_secret", 00:05:25.677 "iscsi_delete_auth_group", 00:05:25.677 "iscsi_create_auth_group", 00:05:25.677 "iscsi_set_discovery_auth", 00:05:25.677 "iscsi_get_options", 00:05:25.677 "iscsi_target_node_request_logout", 00:05:25.677 "iscsi_target_node_set_redirect", 00:05:25.677 "iscsi_target_node_set_auth", 00:05:25.677 "iscsi_target_node_add_lun", 00:05:25.677 "iscsi_get_stats", 00:05:25.677 "iscsi_get_connections", 00:05:25.677 "iscsi_portal_group_set_auth", 00:05:25.677 "iscsi_start_portal_group", 00:05:25.677 "iscsi_delete_portal_group", 00:05:25.677 "iscsi_create_portal_group", 00:05:25.677 "iscsi_get_portal_groups", 00:05:25.677 "iscsi_delete_target_node", 00:05:25.677 "iscsi_target_node_remove_pg_ig_maps", 00:05:25.677 "iscsi_target_node_add_pg_ig_maps", 00:05:25.677 "iscsi_create_target_node", 00:05:25.677 "iscsi_get_target_nodes", 00:05:25.677 "iscsi_delete_initiator_group", 00:05:25.677 "iscsi_initiator_group_remove_initiators", 00:05:25.677 "iscsi_initiator_group_add_initiators", 00:05:25.677 "iscsi_create_initiator_group", 00:05:25.677 "iscsi_get_initiator_groups", 00:05:25.677 "nvmf_set_crdt", 00:05:25.677 "nvmf_set_config", 00:05:25.677 "nvmf_set_max_subsystems", 00:05:25.677 "nvmf_stop_mdns_prr", 00:05:25.677 "nvmf_publish_mdns_prr", 00:05:25.677 "nvmf_subsystem_get_listeners", 00:05:25.677 "nvmf_subsystem_get_qpairs", 00:05:25.677 "nvmf_subsystem_get_controllers", 00:05:25.677 "nvmf_get_stats", 00:05:25.677 "nvmf_get_transports", 00:05:25.677 "nvmf_create_transport", 00:05:25.677 "nvmf_get_targets", 00:05:25.677 "nvmf_delete_target", 00:05:25.677 "nvmf_create_target", 00:05:25.677 "nvmf_subsystem_allow_any_host", 00:05:25.677 
"nvmf_subsystem_set_keys", 00:05:25.677 "nvmf_subsystem_remove_host", 00:05:25.677 "nvmf_subsystem_add_host", 00:05:25.677 "nvmf_ns_remove_host", 00:05:25.677 "nvmf_ns_add_host", 00:05:25.677 "nvmf_subsystem_remove_ns", 00:05:25.677 "nvmf_subsystem_set_ns_ana_group", 00:05:25.677 "nvmf_subsystem_add_ns", 00:05:25.677 "nvmf_subsystem_listener_set_ana_state", 00:05:25.677 "nvmf_discovery_get_referrals", 00:05:25.677 "nvmf_discovery_remove_referral", 00:05:25.677 "nvmf_discovery_add_referral", 00:05:25.677 "nvmf_subsystem_remove_listener", 00:05:25.677 "nvmf_subsystem_add_listener", 00:05:25.677 "nvmf_delete_subsystem", 00:05:25.677 "nvmf_create_subsystem", 00:05:25.677 "nvmf_get_subsystems", 00:05:25.677 "env_dpdk_get_mem_stats", 00:05:25.677 "nbd_get_disks", 00:05:25.677 "nbd_stop_disk", 00:05:25.677 "nbd_start_disk", 00:05:25.677 "ublk_recover_disk", 00:05:25.677 "ublk_get_disks", 00:05:25.677 "ublk_stop_disk", 00:05:25.677 "ublk_start_disk", 00:05:25.677 "ublk_destroy_target", 00:05:25.677 "ublk_create_target", 00:05:25.677 "virtio_blk_create_transport", 00:05:25.677 "virtio_blk_get_transports", 00:05:25.677 "vhost_controller_set_coalescing", 00:05:25.677 "vhost_get_controllers", 00:05:25.677 "vhost_delete_controller", 00:05:25.677 "vhost_create_blk_controller", 00:05:25.677 "vhost_scsi_controller_remove_target", 00:05:25.677 "vhost_scsi_controller_add_target", 00:05:25.677 "vhost_start_scsi_controller", 00:05:25.677 "vhost_create_scsi_controller", 00:05:25.677 "thread_set_cpumask", 00:05:25.677 "scheduler_set_options", 00:05:25.677 "framework_get_governor", 00:05:25.677 "framework_get_scheduler", 00:05:25.677 "framework_set_scheduler", 00:05:25.677 "framework_get_reactors", 00:05:25.677 "thread_get_io_channels", 00:05:25.677 "thread_get_pollers", 00:05:25.677 "thread_get_stats", 00:05:25.677 "framework_monitor_context_switch", 00:05:25.677 "spdk_kill_instance", 00:05:25.677 "log_enable_timestamps", 00:05:25.677 "log_get_flags", 00:05:25.677 "log_clear_flag", 
00:05:25.677 "log_set_flag", 00:05:25.677 "log_get_level", 00:05:25.677 "log_set_level", 00:05:25.677 "log_get_print_level", 00:05:25.677 "log_set_print_level", 00:05:25.677 "framework_enable_cpumask_locks", 00:05:25.677 "framework_disable_cpumask_locks", 00:05:25.677 "framework_wait_init", 00:05:25.677 "framework_start_init", 00:05:25.677 "scsi_get_devices", 00:05:25.677 "bdev_get_histogram", 00:05:25.677 "bdev_enable_histogram", 00:05:25.677 "bdev_set_qos_limit", 00:05:25.677 "bdev_set_qd_sampling_period", 00:05:25.677 "bdev_get_bdevs", 00:05:25.677 "bdev_reset_iostat", 00:05:25.677 "bdev_get_iostat", 00:05:25.677 "bdev_examine", 00:05:25.677 "bdev_wait_for_examine", 00:05:25.677 "bdev_set_options", 00:05:25.678 "accel_get_stats", 00:05:25.678 "accel_set_options", 00:05:25.678 "accel_set_driver", 00:05:25.678 "accel_crypto_key_destroy", 00:05:25.678 "accel_crypto_keys_get", 00:05:25.678 "accel_crypto_key_create", 00:05:25.678 "accel_assign_opc", 00:05:25.678 "accel_get_module_info", 00:05:25.678 "accel_get_opc_assignments", 00:05:25.678 "vmd_rescan", 00:05:25.678 "vmd_remove_device", 00:05:25.678 "vmd_enable", 00:05:25.678 "sock_get_default_impl", 00:05:25.678 "sock_set_default_impl", 00:05:25.678 "sock_impl_set_options", 00:05:25.678 "sock_impl_get_options", 00:05:25.678 "iobuf_get_stats", 00:05:25.678 "iobuf_set_options", 00:05:25.678 "keyring_get_keys", 00:05:25.678 "framework_get_pci_devices", 00:05:25.678 "framework_get_config", 00:05:25.678 "framework_get_subsystems", 00:05:25.678 "fsdev_set_opts", 00:05:25.678 "fsdev_get_opts", 00:05:25.678 "trace_get_info", 00:05:25.678 "trace_get_tpoint_group_mask", 00:05:25.678 "trace_disable_tpoint_group", 00:05:25.678 "trace_enable_tpoint_group", 00:05:25.678 "trace_clear_tpoint_mask", 00:05:25.678 "trace_set_tpoint_mask", 00:05:25.678 "notify_get_notifications", 00:05:25.678 "notify_get_types", 00:05:25.678 "spdk_get_version", 00:05:25.678 "rpc_get_methods" 00:05:25.678 ] 00:05:25.678 09:50:02 spdkcli_tcp -- 
spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:25.678 09:50:02 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:25.678 09:50:02 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:25.678 09:50:02 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:25.678 09:50:02 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57424 00:05:25.678 09:50:02 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 57424 ']' 00:05:25.678 09:50:02 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 57424 00:05:25.678 09:50:02 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:05:25.678 09:50:02 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:25.678 09:50:02 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57424 00:05:25.678 09:50:02 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:25.678 09:50:02 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:25.678 killing process with pid 57424 00:05:25.678 09:50:02 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57424' 00:05:25.678 09:50:02 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 57424 00:05:25.678 09:50:02 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 57424 00:05:28.215 00:05:28.215 real 0m4.632s 00:05:28.215 user 0m8.083s 00:05:28.215 sys 0m0.799s 00:05:28.215 09:50:04 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:28.215 09:50:04 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:28.215 ************************************ 00:05:28.215 END TEST spdkcli_tcp 00:05:28.215 ************************************ 00:05:28.475 09:50:04 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:28.475 09:50:04 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:28.475 09:50:04 -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:05:28.475 09:50:04 -- common/autotest_common.sh@10 -- # set +x 00:05:28.475 ************************************ 00:05:28.475 START TEST dpdk_mem_utility 00:05:28.475 ************************************ 00:05:28.475 09:50:04 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:28.475 * Looking for test storage... 00:05:28.475 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:28.475 09:50:04 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:28.475 09:50:04 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:05:28.475 09:50:04 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:28.475 09:50:05 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:28.475 09:50:05 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:28.475 09:50:05 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:28.475 09:50:05 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:28.475 09:50:05 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:28.475 09:50:05 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:28.475 09:50:05 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:28.475 09:50:05 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:28.475 09:50:05 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:28.475 09:50:05 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:28.475 09:50:05 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:28.475 09:50:05 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:28.475 09:50:05 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:28.475 09:50:05 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:28.475 
09:50:05 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:28.475 09:50:05 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:28.475 09:50:05 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:28.475 09:50:05 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:28.475 09:50:05 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:28.475 09:50:05 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:28.475 09:50:05 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:28.475 09:50:05 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:28.735 09:50:05 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:28.735 09:50:05 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:28.735 09:50:05 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:28.735 09:50:05 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:28.735 09:50:05 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:28.735 09:50:05 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:28.735 09:50:05 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:28.735 09:50:05 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:28.735 09:50:05 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:28.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.735 --rc genhtml_branch_coverage=1 00:05:28.735 --rc genhtml_function_coverage=1 00:05:28.735 --rc genhtml_legend=1 00:05:28.735 --rc geninfo_all_blocks=1 00:05:28.735 --rc geninfo_unexecuted_blocks=1 00:05:28.735 00:05:28.735 ' 00:05:28.735 09:50:05 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:28.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.735 --rc 
genhtml_branch_coverage=1 00:05:28.735 --rc genhtml_function_coverage=1 00:05:28.735 --rc genhtml_legend=1 00:05:28.735 --rc geninfo_all_blocks=1 00:05:28.735 --rc geninfo_unexecuted_blocks=1 00:05:28.735 00:05:28.735 ' 00:05:28.735 09:50:05 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:28.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.735 --rc genhtml_branch_coverage=1 00:05:28.735 --rc genhtml_function_coverage=1 00:05:28.735 --rc genhtml_legend=1 00:05:28.735 --rc geninfo_all_blocks=1 00:05:28.735 --rc geninfo_unexecuted_blocks=1 00:05:28.735 00:05:28.735 ' 00:05:28.735 09:50:05 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:28.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.735 --rc genhtml_branch_coverage=1 00:05:28.735 --rc genhtml_function_coverage=1 00:05:28.735 --rc genhtml_legend=1 00:05:28.735 --rc geninfo_all_blocks=1 00:05:28.735 --rc geninfo_unexecuted_blocks=1 00:05:28.735 00:05:28.735 ' 00:05:28.735 09:50:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:28.735 09:50:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57551 00:05:28.736 09:50:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:28.736 09:50:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57551 00:05:28.736 09:50:05 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 57551 ']' 00:05:28.736 09:50:05 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.736 09:50:05 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:28.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:28.736 09:50:05 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.736 09:50:05 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:28.736 09:50:05 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:28.736 [2024-10-21 09:50:05.176099] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:05:28.736 [2024-10-21 09:50:05.176226] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57551 ] 00:05:28.995 [2024-10-21 09:50:05.338161] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.995 [2024-10-21 09:50:05.478480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.935 09:50:06 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:29.935 09:50:06 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:05:30.196 09:50:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:30.196 09:50:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:30.196 09:50:06 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.196 09:50:06 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:30.196 { 00:05:30.196 "filename": "/tmp/spdk_mem_dump.txt" 00:05:30.196 } 00:05:30.196 09:50:06 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:30.196 09:50:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:30.196 DPDK memory size 816.000000 MiB in 1 heap(s) 00:05:30.196 1 heaps 
totaling size 816.000000 MiB 00:05:30.196 size: 816.000000 MiB heap id: 0 00:05:30.196 end heaps---------- 00:05:30.196 9 mempools totaling size 595.772034 MiB 00:05:30.196 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:30.196 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:30.196 size: 92.545471 MiB name: bdev_io_57551 00:05:30.196 size: 50.003479 MiB name: msgpool_57551 00:05:30.196 size: 36.509338 MiB name: fsdev_io_57551 00:05:30.196 size: 21.763794 MiB name: PDU_Pool 00:05:30.196 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:30.196 size: 4.133484 MiB name: evtpool_57551 00:05:30.196 size: 0.026123 MiB name: Session_Pool 00:05:30.196 end mempools------- 00:05:30.196 6 memzones totaling size 4.142822 MiB 00:05:30.196 size: 1.000366 MiB name: RG_ring_0_57551 00:05:30.196 size: 1.000366 MiB name: RG_ring_1_57551 00:05:30.196 size: 1.000366 MiB name: RG_ring_4_57551 00:05:30.196 size: 1.000366 MiB name: RG_ring_5_57551 00:05:30.196 size: 0.125366 MiB name: RG_ring_2_57551 00:05:30.196 size: 0.015991 MiB name: RG_ring_3_57551 00:05:30.196 end memzones------- 00:05:30.196 09:50:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:30.196 heap id: 0 total size: 816.000000 MiB number of busy elements: 322 number of free elements: 18 00:05:30.196 list of free elements. 
size: 16.789673 MiB 00:05:30.196 element at address: 0x200006400000 with size: 1.995972 MiB 00:05:30.196 element at address: 0x20000a600000 with size: 1.995972 MiB 00:05:30.196 element at address: 0x200003e00000 with size: 1.991028 MiB 00:05:30.196 element at address: 0x200018d00040 with size: 0.999939 MiB 00:05:30.196 element at address: 0x200019100040 with size: 0.999939 MiB 00:05:30.196 element at address: 0x200019200000 with size: 0.999084 MiB 00:05:30.196 element at address: 0x200031e00000 with size: 0.994324 MiB 00:05:30.196 element at address: 0x200000400000 with size: 0.992004 MiB 00:05:30.196 element at address: 0x200018a00000 with size: 0.959656 MiB 00:05:30.196 element at address: 0x200019500040 with size: 0.936401 MiB 00:05:30.196 element at address: 0x200000200000 with size: 0.716980 MiB 00:05:30.196 element at address: 0x20001ac00000 with size: 0.559998 MiB 00:05:30.196 element at address: 0x200000c00000 with size: 0.490173 MiB 00:05:30.196 element at address: 0x200018e00000 with size: 0.487976 MiB 00:05:30.196 element at address: 0x200019600000 with size: 0.485413 MiB 00:05:30.196 element at address: 0x200012c00000 with size: 0.443481 MiB 00:05:30.196 element at address: 0x200028000000 with size: 0.390442 MiB 00:05:30.196 element at address: 0x200000800000 with size: 0.350891 MiB 00:05:30.196 list of standard malloc elements. 
size: 199.289429 MiB 00:05:30.196 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:05:30.196 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:05:30.196 element at address: 0x200018bfff80 with size: 1.000183 MiB 00:05:30.196 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:05:30.196 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:30.196 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:30.196 element at address: 0x2000195eff40 with size: 0.062683 MiB 00:05:30.196 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:30.196 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:05:30.196 element at address: 0x2000195efdc0 with size: 0.000366 MiB 00:05:30.197 element at address: 0x200012bff040 with size: 0.000305 MiB 00:05:30.197 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:30.197 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:30.197 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:05:30.197 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:05:30.197 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:05:30.197 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:05:30.197 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:05:30.197 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:05:30.197 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:05:30.197 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:05:30.197 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:05:30.197 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:05:30.197 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:05:30.197 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:05:30.197 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:05:30.197 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:05:30.197 element at 
address: 0x2000004fed40 with size: 0.000244 MiB 00:05:30.197 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:05:30.197 element at address: 0x2000004fef40 with size: 0.000244 MiB 00:05:30.197 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:05:30.197 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:05:30.197 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:05:30.197 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:05:30.197 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:05:30.197 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:05:30.197 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:05:30.197 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:05:30.197 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:05:30.197 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:05:30.197 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:05:30.197 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:05:30.197 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:05:30.197 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:05:30.197 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:05:30.197 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:05:30.197 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:05:30.197 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:05:30.197 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:05:30.197 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:05:30.197 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:05:30.197 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:05:30.197 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:05:30.197 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:05:30.197 element at address: 0x20000087ecc0 with size: 0.000244 MiB 
00:05:30.197 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:05:30.197 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:05:30.197 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:05:30.197 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:05:30.197 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:05:30.197 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:05:30.197 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:05:30.197 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:05:30.197 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:05:30.197 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:05:30.197 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:05:30.197 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:05:30.197 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:05:30.197 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:05:30.197 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:05:30.197 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:05:30.197 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:05:30.197 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:05:30.197 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:05:30.197 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:05:30.197 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:05:30.197 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:05:30.197 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:05:30.197 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:05:30.197 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:05:30.197 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:05:30.197 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:05:30.197 element at address: 0x200000c7e8c0 with 
size: 0.000244 MiB 00:05:30.197 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:05:30.197 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:05:30.197 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:05:30.197 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:05:30.197 element at address: 0x200000cff000 with size: 0.000244 MiB 00:05:30.197 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:05:30.197 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:05:30.197 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:05:30.197 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:05:30.197 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:05:30.197 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:05:30.197 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:05:30.197 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:05:30.197 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:05:30.197 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:05:30.197 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:05:30.197 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:05:30.197 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:05:30.197 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:05:30.197 element at address: 0x200012bff180 with size: 0.000244 MiB 00:05:30.197 element at address: 0x200012bff280 with size: 0.000244 MiB 00:05:30.197 element at address: 0x200012bff380 with size: 0.000244 MiB 00:05:30.197 element at address: 0x200012bff480 with size: 0.000244 MiB 00:05:30.197 element at address: 0x200012bff580 with size: 0.000244 MiB 00:05:30.197 element at address: 0x200012bff680 with size: 0.000244 MiB 00:05:30.197 element at address: 0x200012bff780 with size: 0.000244 MiB 00:05:30.197 element at address: 0x200012bff880 with size: 0.000244 MiB 00:05:30.197 element at address: 
0x200012bff980 with size: 0.000244 MiB 00:05:30.197 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:05:30.197 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:05:30.197 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:05:30.197 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:05:30.197 element at address: 0x200012c71880 with size: 0.000244 MiB 00:05:30.197 element at address: 0x200012c71980 with size: 0.000244 MiB 00:05:30.197 element at address: 0x200012c71a80 with size: 0.000244 MiB 00:05:30.197 element at address: 0x200012c71b80 with size: 0.000244 MiB 00:05:30.197 element at address: 0x200012c71c80 with size: 0.000244 MiB 00:05:30.197 element at address: 0x200012c71d80 with size: 0.000244 MiB 00:05:30.197 element at address: 0x200012c71e80 with size: 0.000244 MiB 00:05:30.197 element at address: 0x200012c71f80 with size: 0.000244 MiB 00:05:30.197 element at address: 0x200012c72080 with size: 0.000244 MiB 00:05:30.197 element at address: 0x200012c72180 with size: 0.000244 MiB 00:05:30.197 element at address: 0x200012cf24c0 with size: 0.000244 MiB 00:05:30.197 element at address: 0x200018afdd00 with size: 0.000244 MiB 00:05:30.197 element at address: 0x200018e7cec0 with size: 0.000244 MiB 00:05:30.197 element at address: 0x200018e7cfc0 with size: 0.000244 MiB 00:05:30.197 element at address: 0x200018e7d0c0 with size: 0.000244 MiB 00:05:30.197 element at address: 0x200018e7d1c0 with size: 0.000244 MiB 00:05:30.197 element at address: 0x200018e7d2c0 with size: 0.000244 MiB 00:05:30.197 element at address: 0x200018e7d3c0 with size: 0.000244 MiB 00:05:30.197 element at address: 0x200018e7d4c0 with size: 0.000244 MiB 00:05:30.197 element at address: 0x200018e7d5c0 with size: 0.000244 MiB 00:05:30.197 element at address: 0x200018e7d6c0 with size: 0.000244 MiB 00:05:30.197 element at address: 0x200018e7d7c0 with size: 0.000244 MiB 00:05:30.197 element at address: 0x200018e7d8c0 with size: 0.000244 MiB 00:05:30.197 
element at address: 0x200018e7d9c0 with size: 0.000244 MiB 00:05:30.197 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:05:30.197 element at address: 0x2000192ffc40 with size: 0.000244 MiB 00:05:30.197 element at address: 0x2000195efbc0 with size: 0.000244 MiB 00:05:30.197 element at address: 0x2000195efcc0 with size: 0.000244 MiB 00:05:30.197 element at address: 0x2000196bc680 with size: 0.000244 MiB 00:05:30.197 element at address: 0x20001ac8f5c0 with size: 0.000244 MiB 00:05:30.197 element at address: 0x20001ac8f6c0 with size: 0.000244 MiB 00:05:30.197 element at address: 0x20001ac8f7c0 with size: 0.000244 MiB 00:05:30.197 element at address: 0x20001ac8f8c0 with size: 0.000244 MiB 00:05:30.197 element at address: 0x20001ac8f9c0 with size: 0.000244 MiB 00:05:30.197 element at address: 0x20001ac8fac0 with size: 0.000244 MiB 00:05:30.197 element at address: 0x20001ac8fbc0 with size: 0.000244 MiB 00:05:30.197 element at address: 0x20001ac8fcc0 with size: 0.000244 MiB 00:05:30.197 element at address: 0x20001ac8fdc0 with size: 0.000244 MiB 00:05:30.197 element at address: 0x20001ac8fec0 with size: 0.000244 MiB 00:05:30.197 element at address: 0x20001ac8ffc0 with size: 0.000244 MiB 00:05:30.197 element at address: 0x20001ac900c0 with size: 0.000244 MiB 00:05:30.197 element at address: 0x20001ac901c0 with size: 0.000244 MiB 00:05:30.197 element at address: 0x20001ac902c0 with size: 0.000244 MiB 00:05:30.197 element at address: 0x20001ac903c0 with size: 0.000244 MiB 00:05:30.197 element at address: 0x20001ac904c0 with size: 0.000244 MiB 00:05:30.197 element at address: 0x20001ac905c0 with size: 0.000244 MiB 00:05:30.197 element at address: 0x20001ac906c0 with size: 0.000244 MiB 00:05:30.197 element at address: 0x20001ac907c0 with size: 0.000244 MiB 00:05:30.197 element at address: 0x20001ac908c0 with size: 0.000244 MiB 00:05:30.197 element at address: 0x20001ac909c0 with size: 0.000244 MiB 00:05:30.197 element at address: 0x20001ac90ac0 with size: 0.000244 
MiB 00:05:30.197 element at address: 0x20001ac90bc0 with size: 0.000244 MiB 00:05:30.197 element at address: 0x20001ac90cc0 with size: 0.000244 MiB 00:05:30.197 element at address: 0x20001ac90dc0 with size: 0.000244 MiB 00:05:30.197 element at address: 0x20001ac90ec0 with size: 0.000244 MiB 00:05:30.197 element at address: 0x20001ac90fc0 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20001ac910c0 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20001ac911c0 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20001ac912c0 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20001ac913c0 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20001ac914c0 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20001ac915c0 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20001ac916c0 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20001ac917c0 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20001ac918c0 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20001ac919c0 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20001ac91ac0 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20001ac91bc0 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20001ac91cc0 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20001ac91dc0 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20001ac91ec0 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20001ac91fc0 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20001ac920c0 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20001ac921c0 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20001ac922c0 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20001ac923c0 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20001ac924c0 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20001ac925c0 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20001ac926c0 
with size: 0.000244 MiB 00:05:30.198 element at address: 0x20001ac927c0 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20001ac928c0 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20001ac929c0 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20001ac92ac0 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20001ac92bc0 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20001ac92cc0 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20001ac92dc0 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20001ac92ec0 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20001ac92fc0 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20001ac930c0 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20001ac931c0 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20001ac932c0 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20001ac933c0 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20001ac934c0 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20001ac935c0 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20001ac936c0 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20001ac937c0 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20001ac938c0 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20001ac939c0 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20001ac93ac0 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20001ac93bc0 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20001ac93cc0 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20001ac93dc0 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20001ac93ec0 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20001ac93fc0 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20001ac940c0 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20001ac941c0 with size: 0.000244 MiB 00:05:30.198 element at 
address: 0x20001ac942c0 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20001ac943c0 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20001ac944c0 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20001ac945c0 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20001ac946c0 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20001ac947c0 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20001ac948c0 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20001ac949c0 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20001ac94ac0 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20001ac94bc0 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20001ac94cc0 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20001ac94dc0 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20001ac94ec0 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20001ac94fc0 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20001ac950c0 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20001ac951c0 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20001ac952c0 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20001ac953c0 with size: 0.000244 MiB 00:05:30.198 element at address: 0x200028063f40 with size: 0.000244 MiB 00:05:30.198 element at address: 0x200028064040 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20002806ad00 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20002806af80 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20002806b080 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20002806b180 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20002806b280 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20002806b380 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20002806b480 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20002806b580 with size: 0.000244 MiB 
00:05:30.198 element at address: 0x20002806b680 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20002806b780 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20002806b880 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20002806b980 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20002806ba80 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20002806bb80 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20002806bc80 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20002806bd80 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20002806be80 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20002806bf80 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20002806c080 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20002806c180 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20002806c280 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20002806c380 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20002806c480 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20002806c580 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20002806c680 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20002806c780 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20002806c880 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20002806c980 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20002806ca80 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20002806cb80 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20002806cc80 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20002806cd80 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20002806ce80 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20002806cf80 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20002806d080 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20002806d180 with 
size: 0.000244 MiB 00:05:30.198 element at address: 0x20002806d280 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20002806d380 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20002806d480 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20002806d580 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20002806d680 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20002806d780 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20002806d880 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20002806d980 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20002806da80 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20002806db80 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20002806dc80 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20002806dd80 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20002806de80 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20002806df80 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20002806e080 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20002806e180 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20002806e280 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20002806e380 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20002806e480 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20002806e580 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20002806e680 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20002806e780 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20002806e880 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20002806e980 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20002806ea80 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20002806eb80 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20002806ec80 with size: 0.000244 MiB 00:05:30.198 element at address: 
0x20002806ed80 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20002806ee80 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20002806ef80 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20002806f080 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20002806f180 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20002806f280 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20002806f380 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20002806f480 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20002806f580 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20002806f680 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20002806f780 with size: 0.000244 MiB 00:05:30.198 element at address: 0x20002806f880 with size: 0.000244 MiB 00:05:30.199 element at address: 0x20002806f980 with size: 0.000244 MiB 00:05:30.199 element at address: 0x20002806fa80 with size: 0.000244 MiB 00:05:30.199 element at address: 0x20002806fb80 with size: 0.000244 MiB 00:05:30.199 element at address: 0x20002806fc80 with size: 0.000244 MiB 00:05:30.199 element at address: 0x20002806fd80 with size: 0.000244 MiB 00:05:30.199 element at address: 0x20002806fe80 with size: 0.000244 MiB 00:05:30.199 list of memzone associated elements. 
size: 599.920898 MiB
00:05:30.199 element at address: 0x20001ac954c0 with size: 211.416809 MiB
00:05:30.199 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:05:30.199 element at address: 0x20002806ff80 with size: 157.562622 MiB
00:05:30.199 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:05:30.199 element at address: 0x200012df4740 with size: 92.045105 MiB
00:05:30.199 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_57551_0
00:05:30.199 element at address: 0x200000dff340 with size: 48.003113 MiB
00:05:30.199 associated memzone info: size: 48.002930 MiB name: MP_msgpool_57551_0
00:05:30.199 element at address: 0x200003ffdb40 with size: 36.008972 MiB
00:05:30.199 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_57551_0
00:05:30.199 element at address: 0x2000197be900 with size: 20.255615 MiB
00:05:30.199 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:05:30.199 element at address: 0x200031ffeb00 with size: 18.005127 MiB
00:05:30.199 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:05:30.199 element at address: 0x2000004ffec0 with size: 3.000305 MiB
00:05:30.199 associated memzone info: size: 3.000122 MiB name: MP_evtpool_57551_0
00:05:30.199 element at address: 0x2000009ffdc0 with size: 2.000549 MiB
00:05:30.199 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_57551
00:05:30.199 element at address: 0x2000002d7c00 with size: 1.008179 MiB
00:05:30.199 associated memzone info: size: 1.007996 MiB name: MP_evtpool_57551
00:05:30.199 element at address: 0x200018efde00 with size: 1.008179 MiB
00:05:30.199 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:05:30.199 element at address: 0x2000196bc780 with size: 1.008179 MiB
00:05:30.199 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:05:30.199 element at address: 0x200018afde00 with size: 1.008179 MiB
00:05:30.199 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:05:30.199 element at address: 0x200012cf25c0 with size: 1.008179 MiB
00:05:30.199 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:05:30.199 element at address: 0x200000cff100 with size: 1.000549 MiB
00:05:30.199 associated memzone info: size: 1.000366 MiB name: RG_ring_0_57551
00:05:30.199 element at address: 0x2000008ffb80 with size: 1.000549 MiB
00:05:30.199 associated memzone info: size: 1.000366 MiB name: RG_ring_1_57551
00:05:30.199 element at address: 0x2000192ffd40 with size: 1.000549 MiB
00:05:30.199 associated memzone info: size: 1.000366 MiB name: RG_ring_4_57551
00:05:30.199 element at address: 0x200031efe8c0 with size: 1.000549 MiB
00:05:30.199 associated memzone info: size: 1.000366 MiB name: RG_ring_5_57551
00:05:30.199 element at address: 0x20000087f5c0 with size: 0.500549 MiB
00:05:30.199 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_57551
00:05:30.199 element at address: 0x200000c7ecc0 with size: 0.500549 MiB
00:05:30.199 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_57551
00:05:30.199 element at address: 0x200018e7dac0 with size: 0.500549 MiB
00:05:30.199 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:05:30.199 element at address: 0x200012c72280 with size: 0.500549 MiB
00:05:30.199 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:05:30.199 element at address: 0x20001967c440 with size: 0.250549 MiB
00:05:30.199 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:05:30.199 element at address: 0x2000002b78c0 with size: 0.125549 MiB
00:05:30.199 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_57551
00:05:30.199 element at address: 0x20000085df80 with size: 0.125549 MiB
00:05:30.199 associated memzone info: size: 0.125366 MiB name: RG_ring_2_57551
00:05:30.199 element at address: 0x200018af5ac0 with size: 0.031799 MiB
00:05:30.199 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:05:30.199 element at address: 0x200028064140 with size: 0.023804 MiB
00:05:30.199 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:05:30.199 element at address: 0x200000859d40 with size: 0.016174 MiB
00:05:30.199 associated memzone info: size: 0.015991 MiB name: RG_ring_3_57551
00:05:30.199 element at address: 0x20002806a2c0 with size: 0.002502 MiB
00:05:30.199 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:05:30.199 element at address: 0x2000004ffa40 with size: 0.000366 MiB
00:05:30.199 associated memzone info: size: 0.000183 MiB name: MP_msgpool_57551
00:05:30.199 element at address: 0x2000008ff900 with size: 0.000366 MiB
00:05:30.199 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_57551
00:05:30.199 element at address: 0x200012bffd80 with size: 0.000366 MiB
00:05:30.199 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_57551
00:05:30.199 element at address: 0x20002806ae00 with size: 0.000366 MiB
00:05:30.199 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:05:30.199 09:50:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:05:30.199 09:50:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 57551
00:05:30.199 09:50:06 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 57551 ']'
00:05:30.199 09:50:06 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 57551
00:05:30.199 09:50:06 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname
00:05:30.199 09:50:06 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:05:30.199 09:50:06 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57551
00:05:30.199 09:50:06 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:05:30.199 09:50:06 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:05:30.199 09:50:06 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57551'
00:05:30.199 killing process with pid 57551
09:50:06 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 57551
00:05:30.199 09:50:06 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 57551
00:05:32.738
00:05:32.738 real 0m4.470s
00:05:32.738 user 0m4.219s
00:05:32.738 sys 0m0.705s
00:05:32.738 09:50:09 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:32.738 09:50:09 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:05:32.738 ************************************
00:05:32.738 END TEST dpdk_mem_utility
00:05:32.738 ************************************
00:05:33.040 09:50:09 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:05:33.040 09:50:09 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:33.040 09:50:09 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:33.040 09:50:09 -- common/autotest_common.sh@10 -- # set +x
00:05:33.040 ************************************
00:05:33.040 START TEST event
00:05:33.040 ************************************
00:05:33.040 09:50:09 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:05:33.040 * Looking for test storage...
00:05:33.040 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:05:33.040 09:50:09 event -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:05:33.040 09:50:09 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:05:33.040 09:50:09 event -- common/autotest_common.sh@1691 -- # lcov --version
00:05:33.040 09:50:09 event -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:05:33.040 09:50:09 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:33.040 09:50:09 event -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:33.040 09:50:09 event -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:33.040 09:50:09 event -- scripts/common.sh@336 -- # IFS=.-:
00:05:33.040 09:50:09 event -- scripts/common.sh@336 -- # read -ra ver1
00:05:33.040 09:50:09 event -- scripts/common.sh@337 -- # IFS=.-:
00:05:33.040 09:50:09 event -- scripts/common.sh@337 -- # read -ra ver2
00:05:33.040 09:50:09 event -- scripts/common.sh@338 -- # local 'op=<'
00:05:33.040 09:50:09 event -- scripts/common.sh@340 -- # ver1_l=2
00:05:33.040 09:50:09 event -- scripts/common.sh@341 -- # ver2_l=1
00:05:33.040 09:50:09 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:33.040 09:50:09 event -- scripts/common.sh@344 -- # case "$op" in
00:05:33.040 09:50:09 event -- scripts/common.sh@345 -- # : 1
00:05:33.040 09:50:09 event -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:33.040 09:50:09 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:33.040 09:50:09 event -- scripts/common.sh@365 -- # decimal 1
00:05:33.040 09:50:09 event -- scripts/common.sh@353 -- # local d=1
00:05:33.040 09:50:09 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:33.040 09:50:09 event -- scripts/common.sh@355 -- # echo 1
00:05:33.040 09:50:09 event -- scripts/common.sh@365 -- # ver1[v]=1
00:05:33.040 09:50:09 event -- scripts/common.sh@366 -- # decimal 2
00:05:33.040 09:50:09 event -- scripts/common.sh@353 -- # local d=2
00:05:33.040 09:50:09 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:33.040 09:50:09 event -- scripts/common.sh@355 -- # echo 2
00:05:33.040 09:50:09 event -- scripts/common.sh@366 -- # ver2[v]=2
00:05:33.040 09:50:09 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:33.040 09:50:09 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:33.040 09:50:09 event -- scripts/common.sh@368 -- # return 0
00:05:33.040 09:50:09 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:33.040 09:50:09 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:05:33.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:33.040 --rc genhtml_branch_coverage=1
00:05:33.040 --rc genhtml_function_coverage=1
00:05:33.040 --rc genhtml_legend=1
00:05:33.040 --rc geninfo_all_blocks=1
00:05:33.040 --rc geninfo_unexecuted_blocks=1
00:05:33.040
00:05:33.040 '
00:05:33.040 09:50:09 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:05:33.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:33.040 --rc genhtml_branch_coverage=1
00:05:33.040 --rc genhtml_function_coverage=1
00:05:33.040 --rc genhtml_legend=1
00:05:33.040 --rc geninfo_all_blocks=1
00:05:33.040 --rc geninfo_unexecuted_blocks=1
00:05:33.040
00:05:33.040 '
00:05:33.040 09:50:09 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:05:33.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:33.040 --rc genhtml_branch_coverage=1
00:05:33.040 --rc genhtml_function_coverage=1
00:05:33.040 --rc genhtml_legend=1
00:05:33.040 --rc geninfo_all_blocks=1
00:05:33.040 --rc geninfo_unexecuted_blocks=1
00:05:33.040
00:05:33.040 '
00:05:33.040 09:50:09 event -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:05:33.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:33.040 --rc genhtml_branch_coverage=1
00:05:33.040 --rc genhtml_function_coverage=1
00:05:33.040 --rc genhtml_legend=1
00:05:33.040 --rc geninfo_all_blocks=1
00:05:33.040 --rc geninfo_unexecuted_blocks=1
00:05:33.040
00:05:33.040 '
00:05:33.040 09:50:09 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:05:33.040 09:50:09 event -- bdev/nbd_common.sh@6 -- # set -e
00:05:33.040 09:50:09 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:05:33.040 09:50:09 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']'
00:05:33.040 09:50:09 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:33.040 09:50:09 event -- common/autotest_common.sh@10 -- # set +x
00:05:33.040 ************************************
00:05:33.040 START TEST event_perf
00:05:33.040 ************************************
00:05:33.040 09:50:09 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:05:33.299 Running I/O for 1 seconds...[2024-10-21 09:50:09.648580] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization...
00:05:33.299 [2024-10-21 09:50:09.649113] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57665 ]
00:05:33.299 [2024-10-21 09:50:09.812845] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:05:33.558 [2024-10-21 09:50:09.964153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:33.558 [2024-10-21 09:50:09.964286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:05:33.558 [2024-10-21 09:50:09.966109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:33.558 Running I/O for 1 seconds...[2024-10-21 09:50:09.966121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:05:34.934
00:05:34.934 lcore 0: 83235
00:05:34.934 lcore 1: 83238
00:05:34.934 lcore 2: 83241
00:05:34.934 lcore 3: 83240
00:05:34.934 done.
00:05:34.934 00:05:34.934 real 0m1.651s 00:05:34.934 user 0m4.392s 00:05:34.934 sys 0m0.126s 00:05:34.934 09:50:11 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:34.934 09:50:11 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:34.934 ************************************ 00:05:34.934 END TEST event_perf 00:05:34.934 ************************************ 00:05:34.934 09:50:11 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:34.934 09:50:11 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:34.934 09:50:11 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:34.934 09:50:11 event -- common/autotest_common.sh@10 -- # set +x 00:05:34.934 ************************************ 00:05:34.934 START TEST event_reactor 00:05:34.934 ************************************ 00:05:34.934 09:50:11 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:34.934 [2024-10-21 09:50:11.367771] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
00:05:34.934 [2024-10-21 09:50:11.367949] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57710 ] 00:05:35.194 [2024-10-21 09:50:11.530032] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.194 [2024-10-21 09:50:11.671445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.600 test_start 00:05:36.600 oneshot 00:05:36.600 tick 100 00:05:36.600 tick 100 00:05:36.600 tick 250 00:05:36.600 tick 100 00:05:36.600 tick 100 00:05:36.600 tick 100 00:05:36.600 tick 250 00:05:36.600 tick 500 00:05:36.600 tick 100 00:05:36.600 tick 100 00:05:36.600 tick 250 00:05:36.600 tick 100 00:05:36.600 tick 100 00:05:36.600 test_end 00:05:36.600 00:05:36.600 real 0m1.622s 00:05:36.600 user 0m1.412s 00:05:36.600 sys 0m0.103s 00:05:36.600 09:50:12 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:36.600 ************************************ 00:05:36.600 END TEST event_reactor 00:05:36.600 ************************************ 00:05:36.600 09:50:12 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:36.600 09:50:12 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:36.600 09:50:12 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:36.600 09:50:12 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:36.600 09:50:12 event -- common/autotest_common.sh@10 -- # set +x 00:05:36.600 ************************************ 00:05:36.600 START TEST event_reactor_perf 00:05:36.600 ************************************ 00:05:36.600 09:50:13 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:36.600 [2024-10-21 
09:50:13.059539] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:05:36.600 [2024-10-21 09:50:13.059734] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57741 ] 00:05:36.860 [2024-10-21 09:50:13.226329] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.860 [2024-10-21 09:50:13.369647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.239 test_start 00:05:38.239 test_end 00:05:38.239 Performance: 389698 events per second 00:05:38.239 00:05:38.239 real 0m1.622s 00:05:38.239 user 0m1.408s 00:05:38.239 sys 0m0.106s 00:05:38.239 09:50:14 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:38.239 ************************************ 00:05:38.239 END TEST event_reactor_perf 00:05:38.239 ************************************ 00:05:38.239 09:50:14 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:38.239 09:50:14 event -- event/event.sh@49 -- # uname -s 00:05:38.239 09:50:14 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:38.239 09:50:14 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:38.239 09:50:14 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:38.239 09:50:14 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:38.239 09:50:14 event -- common/autotest_common.sh@10 -- # set +x 00:05:38.239 ************************************ 00:05:38.239 START TEST event_scheduler 00:05:38.239 ************************************ 00:05:38.239 09:50:14 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:38.239 * Looking for test storage... 
00:05:38.239 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:38.239 09:50:14 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:38.239 09:50:14 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:05:38.239 09:50:14 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:38.499 09:50:14 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:38.499 09:50:14 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:38.499 09:50:14 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:38.499 09:50:14 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:38.499 09:50:14 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:38.499 09:50:14 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:38.499 09:50:14 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:38.499 09:50:14 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:38.499 09:50:14 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:38.499 09:50:14 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:38.499 09:50:14 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:38.499 09:50:14 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:38.499 09:50:14 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:38.499 09:50:14 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:38.499 09:50:14 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:38.500 09:50:14 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:38.500 09:50:14 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:38.500 09:50:14 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:38.500 09:50:14 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:38.500 09:50:14 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:38.500 09:50:14 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:38.500 09:50:14 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:38.500 09:50:14 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:38.500 09:50:14 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:38.500 09:50:14 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:38.500 09:50:14 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:38.500 09:50:14 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:38.500 09:50:14 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:38.500 09:50:14 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:38.500 09:50:14 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:38.500 09:50:14 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:38.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.500 --rc genhtml_branch_coverage=1 00:05:38.500 --rc genhtml_function_coverage=1 00:05:38.500 --rc genhtml_legend=1 00:05:38.500 --rc geninfo_all_blocks=1 00:05:38.500 --rc geninfo_unexecuted_blocks=1 00:05:38.500 00:05:38.500 ' 00:05:38.500 09:50:14 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:38.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.500 --rc genhtml_branch_coverage=1 00:05:38.500 --rc genhtml_function_coverage=1 00:05:38.500 --rc 
genhtml_legend=1 00:05:38.500 --rc geninfo_all_blocks=1 00:05:38.500 --rc geninfo_unexecuted_blocks=1 00:05:38.500 00:05:38.500 ' 00:05:38.500 09:50:14 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:38.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.500 --rc genhtml_branch_coverage=1 00:05:38.500 --rc genhtml_function_coverage=1 00:05:38.500 --rc genhtml_legend=1 00:05:38.500 --rc geninfo_all_blocks=1 00:05:38.500 --rc geninfo_unexecuted_blocks=1 00:05:38.500 00:05:38.500 ' 00:05:38.500 09:50:14 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:38.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.500 --rc genhtml_branch_coverage=1 00:05:38.500 --rc genhtml_function_coverage=1 00:05:38.500 --rc genhtml_legend=1 00:05:38.500 --rc geninfo_all_blocks=1 00:05:38.500 --rc geninfo_unexecuted_blocks=1 00:05:38.500 00:05:38.500 ' 00:05:38.500 09:50:14 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:38.500 09:50:14 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=57817 00:05:38.500 09:50:14 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:38.500 09:50:14 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:38.500 09:50:14 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 57817 00:05:38.500 09:50:14 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 57817 ']' 00:05:38.500 09:50:14 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.500 09:50:14 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:38.500 09:50:14 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:05:38.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:38.500 09:50:14 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:38.500 09:50:14 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:38.500 [2024-10-21 09:50:15.016325] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:05:38.500 [2024-10-21 09:50:15.016500] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57817 ] 00:05:38.760 [2024-10-21 09:50:15.180704] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:38.760 [2024-10-21 09:50:15.308469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.760 [2024-10-21 09:50:15.308695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:38.760 [2024-10-21 09:50:15.308782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:38.760 [2024-10-21 09:50:15.308751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:39.329 09:50:15 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:39.329 09:50:15 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:05:39.329 09:50:15 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:39.329 09:50:15 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.329 09:50:15 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:39.329 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:39.329 POWER: Cannot set governor of lcore 0 to userspace 00:05:39.329 POWER: failed to open 
/sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:39.329 POWER: Cannot set governor of lcore 0 to performance 00:05:39.329 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:39.329 POWER: Cannot set governor of lcore 0 to userspace 00:05:39.329 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:39.329 POWER: Cannot set governor of lcore 0 to userspace 00:05:39.329 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:39.329 POWER: Unable to set Power Management Environment for lcore 0 00:05:39.329 [2024-10-21 09:50:15.850118] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:05:39.329 [2024-10-21 09:50:15.850162] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:05:39.329 [2024-10-21 09:50:15.850201] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:39.329 [2024-10-21 09:50:15.850244] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:39.329 [2024-10-21 09:50:15.850277] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:39.329 [2024-10-21 09:50:15.850311] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:39.329 09:50:15 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.329 09:50:15 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:39.329 09:50:15 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.329 09:50:15 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:39.898 [2024-10-21 09:50:16.188323] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:05:39.898 09:50:16 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.898 09:50:16 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:39.898 09:50:16 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:39.898 09:50:16 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:39.898 09:50:16 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:39.898 ************************************ 00:05:39.898 START TEST scheduler_create_thread 00:05:39.898 ************************************ 00:05:39.898 09:50:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:05:39.898 09:50:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:39.898 09:50:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.898 09:50:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:39.898 2 00:05:39.898 09:50:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.898 09:50:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:39.898 09:50:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.898 09:50:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:39.898 3 00:05:39.898 09:50:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.898 09:50:16 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:39.898 09:50:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.898 09:50:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:39.898 4 00:05:39.898 09:50:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.898 09:50:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:39.898 09:50:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.898 09:50:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:39.898 5 00:05:39.898 09:50:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.898 09:50:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:39.898 09:50:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.898 09:50:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:39.898 6 00:05:39.898 09:50:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.898 09:50:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:39.898 09:50:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.898 09:50:16 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:05:39.898 7 00:05:39.898 09:50:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.898 09:50:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:39.898 09:50:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.898 09:50:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:39.898 8 00:05:39.898 09:50:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.899 09:50:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:39.899 09:50:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.899 09:50:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:39.899 9 00:05:39.899 09:50:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.899 09:50:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:39.899 09:50:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.899 09:50:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:39.899 10 00:05:39.899 09:50:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.899 09:50:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:05:39.899 09:50:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.899 09:50:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:41.276 09:50:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.276 09:50:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:41.276 09:50:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:41.276 09:50:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.276 09:50:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.212 09:50:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.212 09:50:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:42.212 09:50:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.212 09:50:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.780 09:50:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.780 09:50:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:42.780 09:50:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:42.780 09:50:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.780 09:50:19 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:43.720 ************************************ 00:05:43.720 END TEST scheduler_create_thread 00:05:43.720 ************************************ 00:05:43.720 09:50:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:43.720 00:05:43.720 real 0m3.883s 00:05:43.720 user 0m0.029s 00:05:43.720 sys 0m0.008s 00:05:43.720 09:50:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:43.720 09:50:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:43.720 09:50:20 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:43.720 09:50:20 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 57817 00:05:43.720 09:50:20 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 57817 ']' 00:05:43.720 09:50:20 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 57817 00:05:43.720 09:50:20 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:05:43.720 09:50:20 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:43.720 09:50:20 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57817 00:05:43.720 killing process with pid 57817 00:05:43.720 09:50:20 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:43.720 09:50:20 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:43.720 09:50:20 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57817' 00:05:43.720 09:50:20 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 57817 00:05:43.720 09:50:20 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 57817 00:05:43.979 [2024-10-21 09:50:20.465603] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:45.363 ************************************ 00:05:45.363 END TEST event_scheduler 00:05:45.363 ************************************ 00:05:45.363 00:05:45.363 real 0m6.877s 00:05:45.363 user 0m14.131s 00:05:45.363 sys 0m0.512s 00:05:45.363 09:50:21 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:45.363 09:50:21 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:45.363 09:50:21 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:45.363 09:50:21 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:45.363 09:50:21 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:45.363 09:50:21 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:45.363 09:50:21 event -- common/autotest_common.sh@10 -- # set +x 00:05:45.363 ************************************ 00:05:45.363 START TEST app_repeat 00:05:45.363 ************************************ 00:05:45.363 09:50:21 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:05:45.363 09:50:21 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.363 09:50:21 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.363 09:50:21 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:45.363 09:50:21 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:45.363 09:50:21 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:45.363 09:50:21 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:45.363 09:50:21 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:45.363 09:50:21 event.app_repeat -- event/event.sh@19 -- # repeat_pid=57945 00:05:45.363 09:50:21 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:45.363 
09:50:21 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:45.363 09:50:21 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 57945' 00:05:45.363 Process app_repeat pid: 57945 00:05:45.363 spdk_app_start Round 0 00:05:45.363 09:50:21 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:45.363 09:50:21 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:45.363 09:50:21 event.app_repeat -- event/event.sh@25 -- # waitforlisten 57945 /var/tmp/spdk-nbd.sock 00:05:45.363 09:50:21 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 57945 ']' 00:05:45.363 09:50:21 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:45.363 09:50:21 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:45.363 09:50:21 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:45.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:45.363 09:50:21 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:45.364 09:50:21 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:45.364 [2024-10-21 09:50:21.717264] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
00:05:45.364 [2024-10-21 09:50:21.717395] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57945 ] 00:05:45.364 [2024-10-21 09:50:21.878925] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:45.623 [2024-10-21 09:50:21.998028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.623 [2024-10-21 09:50:21.998090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:46.192 09:50:22 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:46.192 09:50:22 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:46.192 09:50:22 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:46.452 Malloc0 00:05:46.452 09:50:22 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:46.712 Malloc1 00:05:46.712 09:50:23 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:46.712 09:50:23 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.712 09:50:23 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:46.712 09:50:23 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:46.712 09:50:23 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.712 09:50:23 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:46.712 09:50:23 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:46.712 09:50:23 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.712 09:50:23 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:46.712 09:50:23 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:46.712 09:50:23 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.712 09:50:23 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:46.712 09:50:23 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:46.712 09:50:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:46.712 09:50:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:46.712 09:50:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:46.712 /dev/nbd0 00:05:46.712 09:50:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:46.712 09:50:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:46.712 09:50:23 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:46.712 09:50:23 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:46.712 09:50:23 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:46.712 09:50:23 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:46.712 09:50:23 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:46.712 09:50:23 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:46.712 09:50:23 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:46.712 09:50:23 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:46.712 09:50:23 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:46.712 1+0 records in 00:05:46.712 1+0 
records out 00:05:46.712 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000240342 s, 17.0 MB/s 00:05:46.712 09:50:23 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:46.973 09:50:23 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:46.973 09:50:23 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:46.973 09:50:23 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:46.973 09:50:23 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:46.973 09:50:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:46.973 09:50:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:46.973 09:50:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:46.973 /dev/nbd1 00:05:46.973 09:50:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:46.973 09:50:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:46.973 09:50:23 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:46.973 09:50:23 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:46.973 09:50:23 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:46.973 09:50:23 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:46.973 09:50:23 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:46.973 09:50:23 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:46.973 09:50:23 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:46.973 09:50:23 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:46.973 09:50:23 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:46.973 1+0 records in 00:05:46.973 1+0 records out 00:05:46.973 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000406187 s, 10.1 MB/s 00:05:46.973 09:50:23 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:46.973 09:50:23 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:46.973 09:50:23 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:46.973 09:50:23 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:46.973 09:50:23 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:46.973 09:50:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:46.973 09:50:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:46.973 09:50:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:46.973 09:50:23 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.973 09:50:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:47.232 09:50:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:47.232 { 00:05:47.232 "nbd_device": "/dev/nbd0", 00:05:47.232 "bdev_name": "Malloc0" 00:05:47.232 }, 00:05:47.232 { 00:05:47.232 "nbd_device": "/dev/nbd1", 00:05:47.232 "bdev_name": "Malloc1" 00:05:47.232 } 00:05:47.232 ]' 00:05:47.232 09:50:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:47.232 09:50:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:47.232 { 00:05:47.232 "nbd_device": "/dev/nbd0", 00:05:47.232 "bdev_name": "Malloc0" 00:05:47.232 }, 00:05:47.232 { 00:05:47.232 "nbd_device": "/dev/nbd1", 00:05:47.232 "bdev_name": "Malloc1" 00:05:47.232 } 00:05:47.232 ]' 
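The records above show `nbd_get_count` turning the `nbd_get_disks` JSON into device names with `jq` and then counting them with `grep -c /dev/nbd`. A self-contained sketch of that counting step, using the two device names from this run (the `jq` extraction is skipped and the name list is hard-coded here):

```shell
# Count attached nbd devices the same way nbd_common.sh@65 does:
# one device name per line, grep -c tallies the matching lines.
nbd_disks_name='/dev/nbd0
/dev/nbd1'
count=$(printf '%s\n' "$nbd_disks_name" | grep -c /dev/nbd)
echo "$count"   # prints: 2
```

This is why the trace assigns `count=2` twice, once from the grep and once again at nbd_common.sh@95, before the `'[' 2 -ne 2 ']'` sanity check that passes silently.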
00:05:47.232 09:50:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:47.232 /dev/nbd1' 00:05:47.232 09:50:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:47.232 /dev/nbd1' 00:05:47.232 09:50:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:47.232 09:50:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:47.232 09:50:23 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:47.232 09:50:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:47.232 09:50:23 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:47.232 09:50:23 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:47.232 09:50:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:47.232 09:50:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:47.232 09:50:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:47.232 09:50:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:47.232 09:50:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:47.232 09:50:23 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:47.232 256+0 records in 00:05:47.232 256+0 records out 00:05:47.232 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00767655 s, 137 MB/s 00:05:47.232 09:50:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:47.232 09:50:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:47.492 256+0 records in 00:05:47.492 256+0 records out 00:05:47.492 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.019309 s, 54.3 MB/s 00:05:47.492 09:50:23 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:47.492 09:50:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:47.492 256+0 records in 00:05:47.492 256+0 records out 00:05:47.492 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0252939 s, 41.5 MB/s 00:05:47.492 09:50:23 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:47.492 09:50:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:47.492 09:50:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:47.492 09:50:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:47.492 09:50:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:47.492 09:50:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:47.492 09:50:23 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:47.492 09:50:23 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:47.492 09:50:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:47.492 09:50:23 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:47.492 09:50:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:47.492 09:50:23 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:47.492 09:50:23 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:47.492 09:50:23 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:47.492 09:50:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:47.492 09:50:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:47.492 09:50:23 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:47.492 09:50:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:47.492 09:50:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:47.769 09:50:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:47.769 09:50:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:47.769 09:50:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:47.769 09:50:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:47.769 09:50:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:47.769 09:50:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:47.769 09:50:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:47.769 09:50:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:47.769 09:50:24 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:47.769 09:50:24 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:47.769 09:50:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:47.769 09:50:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:47.769 09:50:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:47.769 09:50:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:47.769 09:50:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:47.769 09:50:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:47.769 09:50:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:05:47.769 09:50:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:47.769 09:50:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:47.769 09:50:24 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:47.769 09:50:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:48.048 09:50:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:48.048 09:50:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:48.048 09:50:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:48.048 09:50:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:48.048 09:50:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:48.048 09:50:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:48.048 09:50:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:48.048 09:50:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:48.048 09:50:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:48.048 09:50:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:48.048 09:50:24 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:48.048 09:50:24 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:48.048 09:50:24 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:48.618 09:50:24 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:49.998 [2024-10-21 09:50:26.177123] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:49.998 [2024-10-21 09:50:26.317378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:49.998 [2024-10-21 09:50:26.317378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.998 
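Each `/dev/nbdX` attach in the trace above runs `waitfornbd`, which polls `/proc/partitions` up to 20 times before `break`ing out. A rough stand-alone approximation of that loop, with a temp file standing in for `/proc/partitions` (the real helper also follows up with the dd/stat size check visible in the log):

```shell
# Poll for the device name, as common/autotest_common.sh@871-873 does,
# giving up after 20 attempts. A temp file simulates /proc/partitions.
partitions=$(mktemp)
printf ' 43 0 16384 nbd0\n' > "$partitions"   # pretend the kernel registered nbd0
i=1
while [ "$i" -le 20 ]; do
    grep -q -w nbd0 "$partitions" && break    # found it: stop waiting
    sleep 0.1
    i=$((i + 1))
done
rm -f "$partitions"
echo "found on attempt $i"   # prints: found on attempt 1
```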
[2024-10-21 09:50:26.545901] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:49.998 [2024-10-21 09:50:26.545991] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:51.906 spdk_app_start Round 1 00:05:51.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:51.906 09:50:27 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:51.906 09:50:27 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:51.906 09:50:27 event.app_repeat -- event/event.sh@25 -- # waitforlisten 57945 /var/tmp/spdk-nbd.sock 00:05:51.906 09:50:27 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 57945 ']' 00:05:51.906 09:50:27 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:51.906 09:50:27 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:51.906 09:50:27 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:51.906 09:50:27 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:51.906 09:50:27 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:51.906 09:50:28 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:51.906 09:50:28 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:51.906 09:50:28 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:51.906 Malloc0 00:05:51.906 09:50:28 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:52.166 Malloc1 00:05:52.166 09:50:28 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:52.166 09:50:28 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.166 09:50:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:52.166 09:50:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:52.166 09:50:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:52.166 09:50:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:52.166 09:50:28 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:52.166 09:50:28 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.166 09:50:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:52.166 09:50:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:52.166 09:50:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:52.166 09:50:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:52.166 09:50:28 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:52.166 09:50:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:52.166 09:50:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:52.166 09:50:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:52.425 /dev/nbd0 00:05:52.425 09:50:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:52.425 09:50:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:52.425 09:50:28 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:52.425 09:50:28 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:52.425 09:50:28 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:52.425 09:50:28 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:52.425 09:50:28 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:52.425 09:50:28 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:52.425 09:50:28 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:52.425 09:50:28 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:52.425 09:50:28 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:52.425 1+0 records in 00:05:52.425 1+0 records out 00:05:52.425 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000320719 s, 12.8 MB/s 00:05:52.425 09:50:28 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:52.425 09:50:28 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:52.425 09:50:28 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:52.425 
09:50:28 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:52.425 09:50:28 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:52.425 09:50:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:52.425 09:50:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:52.426 09:50:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:52.686 /dev/nbd1 00:05:52.686 09:50:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:52.686 09:50:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:52.686 09:50:29 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:52.686 09:50:29 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:52.686 09:50:29 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:52.686 09:50:29 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:52.686 09:50:29 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:52.686 09:50:29 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:52.686 09:50:29 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:52.686 09:50:29 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:52.686 09:50:29 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:52.686 1+0 records in 00:05:52.686 1+0 records out 00:05:52.686 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000364496 s, 11.2 MB/s 00:05:52.686 09:50:29 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:52.686 09:50:29 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:52.686 09:50:29 event.app_repeat 
-- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:52.686 09:50:29 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:52.686 09:50:29 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:52.686 09:50:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:52.686 09:50:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:52.686 09:50:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:52.686 09:50:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.686 09:50:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:52.946 09:50:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:52.946 { 00:05:52.946 "nbd_device": "/dev/nbd0", 00:05:52.946 "bdev_name": "Malloc0" 00:05:52.946 }, 00:05:52.946 { 00:05:52.946 "nbd_device": "/dev/nbd1", 00:05:52.946 "bdev_name": "Malloc1" 00:05:52.946 } 00:05:52.946 ]' 00:05:52.946 09:50:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:52.946 { 00:05:52.946 "nbd_device": "/dev/nbd0", 00:05:52.946 "bdev_name": "Malloc0" 00:05:52.946 }, 00:05:52.946 { 00:05:52.946 "nbd_device": "/dev/nbd1", 00:05:52.946 "bdev_name": "Malloc1" 00:05:52.946 } 00:05:52.946 ]' 00:05:52.946 09:50:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:52.946 09:50:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:52.946 /dev/nbd1' 00:05:52.946 09:50:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:52.946 09:50:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:52.946 /dev/nbd1' 00:05:52.946 09:50:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:52.946 09:50:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:52.946 
09:50:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:52.946 09:50:29 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:52.946 09:50:29 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:52.946 09:50:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:52.946 09:50:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:52.946 09:50:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:52.946 09:50:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:52.946 09:50:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:52.946 09:50:29 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:52.946 256+0 records in 00:05:52.946 256+0 records out 00:05:52.946 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.012171 s, 86.2 MB/s 00:05:52.946 09:50:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:52.946 09:50:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:53.206 256+0 records in 00:05:53.206 256+0 records out 00:05:53.206 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0224954 s, 46.6 MB/s 00:05:53.206 09:50:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:53.206 09:50:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:53.206 256+0 records in 00:05:53.206 256+0 records out 00:05:53.206 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.023002 s, 45.6 MB/s 00:05:53.206 09:50:29 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:05:53.206 09:50:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.206 09:50:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:53.206 09:50:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:53.206 09:50:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:53.206 09:50:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:53.206 09:50:29 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:53.206 09:50:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:53.206 09:50:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:53.206 09:50:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:53.206 09:50:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:53.206 09:50:29 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:53.206 09:50:29 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:53.206 09:50:29 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.206 09:50:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.206 09:50:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:53.206 09:50:29 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:53.206 09:50:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:53.206 09:50:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:53.467 09:50:29 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:53.467 09:50:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:53.467 09:50:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:53.467 09:50:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:53.467 09:50:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:53.467 09:50:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:53.467 09:50:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:53.467 09:50:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:53.467 09:50:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:53.467 09:50:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:53.467 09:50:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:53.467 09:50:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:53.467 09:50:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:53.467 09:50:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:53.467 09:50:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:53.467 09:50:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:53.467 09:50:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:53.467 09:50:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:53.467 09:50:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:53.467 09:50:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.467 09:50:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:53.727 09:50:30 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:53.727 09:50:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:53.727 09:50:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:53.727 09:50:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:53.727 09:50:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:53.727 09:50:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:53.727 09:50:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:53.727 09:50:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:53.727 09:50:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:53.727 09:50:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:53.727 09:50:30 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:53.727 09:50:30 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:53.727 09:50:30 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:54.295 09:50:30 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:55.677 [2024-10-21 09:50:31.911215] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:55.677 [2024-10-21 09:50:32.046048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.677 [2024-10-21 09:50:32.046070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:55.936 [2024-10-21 09:50:32.276478] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:55.936 [2024-10-21 09:50:32.276559] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:57.316 spdk_app_start Round 2 00:05:57.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
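Round 2 above repeats `nbd_dd_data_verify`: write 1 MiB of `/dev/urandom` data through each device, then `cmp` it back against the source file. A sketch of that write/verify pattern with plain temp files standing in for the `/dev/nbd` devices:

```shell
# Write-then-verify, following bdev/nbd_common.sh@76-83: fill a source
# file with random data, copy it (the copy stands in for the nbd write),
# then compare byte-for-byte. cmp's 1M limit matches the 256x4096 size.
src=$(mktemp); dst=$(mktemp)
dd if=/dev/urandom of="$src" bs=4096 count=256 2>/dev/null
dd if="$src" of="$dst" bs=4096 count=256 2>/dev/null
if cmp -b -n 1M "$src" "$dst"; then result=ok; else result=mismatch; fi
rm -f "$src" "$dst"
echo "verify: $result"   # prints: verify: ok
```

In the real run the destination is the nbd device, so a successful `cmp` confirms the whole RPC-to-block-device path round-trips data intact before `nbd_stop_disks` tears everything down.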
00:05:57.316 09:50:33 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:57.316 09:50:33 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:57.316 09:50:33 event.app_repeat -- event/event.sh@25 -- # waitforlisten 57945 /var/tmp/spdk-nbd.sock 00:05:57.316 09:50:33 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 57945 ']' 00:05:57.316 09:50:33 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:57.316 09:50:33 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:57.316 09:50:33 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:57.316 09:50:33 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:57.316 09:50:33 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:57.316 09:50:33 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:57.316 09:50:33 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:57.316 09:50:33 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:57.576 Malloc0 00:05:57.836 09:50:34 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:58.096 Malloc1 00:05:58.096 09:50:34 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:58.096 09:50:34 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.096 09:50:34 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:58.096 09:50:34 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:58.096 09:50:34 event.app_repeat -- bdev/nbd_common.sh@92 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.096 09:50:34 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:58.096 09:50:34 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:58.096 09:50:34 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.096 09:50:34 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:58.096 09:50:34 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:58.096 09:50:34 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.096 09:50:34 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:58.096 09:50:34 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:58.096 09:50:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:58.096 09:50:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:58.096 09:50:34 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:58.356 /dev/nbd0 00:05:58.356 09:50:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:58.356 09:50:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:58.356 09:50:34 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:58.356 09:50:34 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:58.356 09:50:34 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:58.356 09:50:34 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:58.356 09:50:34 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:58.356 09:50:34 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:58.356 09:50:34 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 
00:05:58.356 09:50:34 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:58.356 09:50:34 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:58.356 1+0 records in 00:05:58.356 1+0 records out 00:05:58.356 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000256894 s, 15.9 MB/s 00:05:58.356 09:50:34 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:58.356 09:50:34 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:58.356 09:50:34 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:58.356 09:50:34 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:58.356 09:50:34 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:58.356 09:50:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:58.356 09:50:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:58.356 09:50:34 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:58.356 /dev/nbd1 00:05:58.616 09:50:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:58.616 09:50:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:58.616 09:50:34 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:58.616 09:50:34 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:58.616 09:50:34 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:58.616 09:50:34 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:58.616 09:50:34 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:58.616 09:50:34 event.app_repeat -- 
common/autotest_common.sh@873 -- # break 00:05:58.616 09:50:34 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:58.616 09:50:34 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:58.616 09:50:34 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:58.616 1+0 records in 00:05:58.616 1+0 records out 00:05:58.616 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00037404 s, 11.0 MB/s 00:05:58.616 09:50:34 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:58.616 09:50:34 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:58.616 09:50:34 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:58.616 09:50:34 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:58.616 09:50:34 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:58.616 09:50:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:58.616 09:50:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:58.616 09:50:34 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:58.616 09:50:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.616 09:50:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:58.616 09:50:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:58.616 { 00:05:58.616 "nbd_device": "/dev/nbd0", 00:05:58.616 "bdev_name": "Malloc0" 00:05:58.616 }, 00:05:58.616 { 00:05:58.616 "nbd_device": "/dev/nbd1", 00:05:58.616 "bdev_name": "Malloc1" 00:05:58.616 } 00:05:58.616 ]' 00:05:58.616 09:50:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:58.616 { 
00:05:58.616 "nbd_device": "/dev/nbd0", 00:05:58.616 "bdev_name": "Malloc0" 00:05:58.616 }, 00:05:58.616 { 00:05:58.616 "nbd_device": "/dev/nbd1", 00:05:58.616 "bdev_name": "Malloc1" 00:05:58.616 } 00:05:58.616 ]' 00:05:58.616 09:50:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:58.876 09:50:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:58.876 /dev/nbd1' 00:05:58.876 09:50:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:58.876 /dev/nbd1' 00:05:58.876 09:50:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:58.876 09:50:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:58.876 09:50:35 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:58.876 09:50:35 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:58.876 09:50:35 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:58.876 09:50:35 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:58.876 09:50:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.876 09:50:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:58.876 09:50:35 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:58.876 09:50:35 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:58.876 09:50:35 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:58.876 09:50:35 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:58.876 256+0 records in 00:05:58.876 256+0 records out 00:05:58.876 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00574111 s, 183 MB/s 00:05:58.876 09:50:35 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:58.876 09:50:35 event.app_repeat -- 
bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:58.876 256+0 records in 00:05:58.876 256+0 records out 00:05:58.876 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0286497 s, 36.6 MB/s 00:05:58.876 09:50:35 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:58.876 09:50:35 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:58.876 256+0 records in 00:05:58.876 256+0 records out 00:05:58.876 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.027865 s, 37.6 MB/s 00:05:58.876 09:50:35 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:58.876 09:50:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.876 09:50:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:58.876 09:50:35 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:58.876 09:50:35 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:58.876 09:50:35 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:58.876 09:50:35 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:58.876 09:50:35 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:58.876 09:50:35 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:58.876 09:50:35 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:58.876 09:50:35 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:58.876 09:50:35 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
00:05:58.876 09:50:35 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:58.876 09:50:35 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.876 09:50:35 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.876 09:50:35 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:58.876 09:50:35 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:58.876 09:50:35 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:58.876 09:50:35 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:59.136 09:50:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:59.136 09:50:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:59.136 09:50:35 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:59.136 09:50:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:59.136 09:50:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:59.136 09:50:35 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:59.136 09:50:35 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:59.136 09:50:35 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:59.136 09:50:35 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:59.136 09:50:35 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:59.396 09:50:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:59.396 09:50:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:59.396 09:50:35 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:59.396 09:50:35 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:59.396 09:50:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:59.396 09:50:35 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:59.396 09:50:35 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:59.396 09:50:35 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:59.396 09:50:35 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:59.396 09:50:35 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.396 09:50:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:59.655 09:50:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:59.655 09:50:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:59.655 09:50:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:59.655 09:50:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:59.655 09:50:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:59.655 09:50:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:59.655 09:50:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:59.655 09:50:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:59.655 09:50:36 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:59.655 09:50:36 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:59.655 09:50:36 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:59.655 09:50:36 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:59.655 09:50:36 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:00.225 09:50:36 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:01.164 
[2024-10-21 09:50:37.740153] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:01.424 [2024-10-21 09:50:37.880475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:01.424 [2024-10-21 09:50:37.880477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.684 [2024-10-21 09:50:38.117081] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:01.684 [2024-10-21 09:50:38.117420] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:03.098 09:50:39 event.app_repeat -- event/event.sh@38 -- # waitforlisten 57945 /var/tmp/spdk-nbd.sock 00:06:03.098 09:50:39 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 57945 ']' 00:06:03.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:03.098 09:50:39 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:03.098 09:50:39 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:03.099 09:50:39 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:03.099 09:50:39 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:03.099 09:50:39 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:03.358 09:50:39 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:03.358 09:50:39 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:03.358 09:50:39 event.app_repeat -- event/event.sh@39 -- # killprocess 57945 00:06:03.358 09:50:39 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 57945 ']' 00:06:03.358 09:50:39 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 57945 00:06:03.358 09:50:39 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:06:03.358 09:50:39 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:03.358 09:50:39 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57945 00:06:03.358 killing process with pid 57945 00:06:03.358 09:50:39 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:03.358 09:50:39 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:03.358 09:50:39 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57945' 00:06:03.358 09:50:39 event.app_repeat -- common/autotest_common.sh@969 -- # kill 57945 00:06:03.358 09:50:39 event.app_repeat -- common/autotest_common.sh@974 -- # wait 57945 00:06:04.295 spdk_app_start is called in Round 0. 00:06:04.295 Shutdown signal received, stop current app iteration 00:06:04.295 Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 reinitialization... 00:06:04.295 spdk_app_start is called in Round 1. 00:06:04.295 Shutdown signal received, stop current app iteration 00:06:04.295 Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 reinitialization... 00:06:04.295 spdk_app_start is called in Round 2. 
00:06:04.295 Shutdown signal received, stop current app iteration 00:06:04.295 Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 reinitialization... 00:06:04.295 spdk_app_start is called in Round 3. 00:06:04.295 Shutdown signal received, stop current app iteration 00:06:04.555 09:50:40 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:04.555 ************************************ 00:06:04.555 END TEST app_repeat 00:06:04.555 ************************************ 00:06:04.555 09:50:40 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:04.555 00:06:04.555 real 0m19.258s 00:06:04.555 user 0m40.779s 00:06:04.555 sys 0m2.754s 00:06:04.555 09:50:40 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:04.555 09:50:40 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:04.555 09:50:40 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:04.555 09:50:40 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:04.555 09:50:40 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:04.555 09:50:40 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:04.555 09:50:40 event -- common/autotest_common.sh@10 -- # set +x 00:06:04.555 ************************************ 00:06:04.555 START TEST cpu_locks 00:06:04.555 ************************************ 00:06:04.555 09:50:40 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:04.555 * Looking for test storage... 
00:06:04.555 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:04.555 09:50:41 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:04.555 09:50:41 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:04.555 09:50:41 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:06:04.814 09:50:41 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:04.814 09:50:41 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:04.814 09:50:41 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:04.814 09:50:41 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:04.814 09:50:41 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:04.814 09:50:41 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:04.814 09:50:41 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:04.814 09:50:41 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:04.814 09:50:41 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:04.814 09:50:41 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:04.814 09:50:41 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:04.814 09:50:41 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:04.814 09:50:41 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:04.814 09:50:41 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:04.814 09:50:41 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:04.814 09:50:41 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:04.814 09:50:41 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:04.814 09:50:41 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:04.814 09:50:41 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:04.814 09:50:41 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:04.814 09:50:41 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:04.814 09:50:41 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:04.814 09:50:41 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:04.814 09:50:41 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:04.814 09:50:41 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:04.814 09:50:41 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:04.814 09:50:41 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:04.814 09:50:41 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:04.814 09:50:41 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:04.814 09:50:41 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:04.814 09:50:41 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:04.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.814 --rc genhtml_branch_coverage=1 00:06:04.814 --rc genhtml_function_coverage=1 00:06:04.814 --rc genhtml_legend=1 00:06:04.814 --rc geninfo_all_blocks=1 00:06:04.814 --rc geninfo_unexecuted_blocks=1 00:06:04.814 00:06:04.814 ' 00:06:04.814 09:50:41 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:04.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.814 --rc genhtml_branch_coverage=1 00:06:04.814 --rc genhtml_function_coverage=1 00:06:04.814 --rc genhtml_legend=1 00:06:04.814 --rc geninfo_all_blocks=1 00:06:04.814 --rc geninfo_unexecuted_blocks=1 
00:06:04.814 00:06:04.814 ' 00:06:04.814 09:50:41 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:04.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.814 --rc genhtml_branch_coverage=1 00:06:04.814 --rc genhtml_function_coverage=1 00:06:04.814 --rc genhtml_legend=1 00:06:04.814 --rc geninfo_all_blocks=1 00:06:04.814 --rc geninfo_unexecuted_blocks=1 00:06:04.814 00:06:04.814 ' 00:06:04.814 09:50:41 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:04.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.815 --rc genhtml_branch_coverage=1 00:06:04.815 --rc genhtml_function_coverage=1 00:06:04.815 --rc genhtml_legend=1 00:06:04.815 --rc geninfo_all_blocks=1 00:06:04.815 --rc geninfo_unexecuted_blocks=1 00:06:04.815 00:06:04.815 ' 00:06:04.815 09:50:41 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:04.815 09:50:41 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:04.815 09:50:41 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:04.815 09:50:41 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:04.815 09:50:41 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:04.815 09:50:41 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:04.815 09:50:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:04.815 ************************************ 00:06:04.815 START TEST default_locks 00:06:04.815 ************************************ 00:06:04.815 09:50:41 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:06:04.815 09:50:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58387 00:06:04.815 09:50:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:04.815 
09:50:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58387 00:06:04.815 09:50:41 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 58387 ']' 00:06:04.815 09:50:41 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.815 09:50:41 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:04.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:04.815 09:50:41 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.815 09:50:41 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:04.815 09:50:41 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:04.815 [2024-10-21 09:50:41.311715] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
00:06:04.815 [2024-10-21 09:50:41.311822] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58387 ] 00:06:05.073 [2024-10-21 09:50:41.474861] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.073 [2024-10-21 09:50:41.621277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.451 09:50:42 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:06.451 09:50:42 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:06:06.451 09:50:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58387 00:06:06.451 09:50:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58387 00:06:06.451 09:50:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:06.712 09:50:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58387 00:06:06.712 09:50:43 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 58387 ']' 00:06:06.712 09:50:43 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 58387 00:06:06.712 09:50:43 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:06:06.712 09:50:43 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:06.712 09:50:43 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58387 00:06:06.712 09:50:43 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:06.712 09:50:43 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:06.712 killing process with pid 58387 00:06:06.712 09:50:43 event.cpu_locks.default_locks -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 58387' 00:06:06.712 09:50:43 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 58387 00:06:06.712 09:50:43 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 58387 00:06:09.249 09:50:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58387 00:06:09.249 09:50:45 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:06:09.249 09:50:45 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 58387 00:06:09.249 09:50:45 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:09.249 09:50:45 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:09.249 09:50:45 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:09.249 09:50:45 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:09.249 09:50:45 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 58387 00:06:09.249 09:50:45 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 58387 ']' 00:06:09.249 09:50:45 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.249 09:50:45 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:09.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:09.249 09:50:45 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:09.249 09:50:45 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:09.249 09:50:45 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:09.249 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (58387) - No such process 00:06:09.249 ERROR: process (pid: 58387) is no longer running 00:06:09.249 09:50:45 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:09.249 09:50:45 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:06:09.249 09:50:45 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:06:09.249 09:50:45 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:09.249 09:50:45 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:09.249 09:50:45 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:09.249 09:50:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:09.249 09:50:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:09.249 09:50:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:09.249 09:50:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:09.249 00:06:09.249 real 0m4.544s 00:06:09.249 user 0m4.338s 00:06:09.249 sys 0m0.801s 00:06:09.249 09:50:45 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:09.249 09:50:45 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:09.249 ************************************ 00:06:09.249 END TEST default_locks 00:06:09.249 ************************************ 00:06:09.249 09:50:45 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:09.249 09:50:45 event.cpu_locks -- common/autotest_common.sh@1101 -- # 
'[' 2 -le 1 ']' 00:06:09.249 09:50:45 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:09.249 09:50:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:09.249 ************************************ 00:06:09.249 START TEST default_locks_via_rpc 00:06:09.249 ************************************ 00:06:09.249 09:50:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:06:09.249 09:50:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58462 00:06:09.249 09:50:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:09.249 09:50:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58462 00:06:09.249 09:50:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 58462 ']' 00:06:09.249 09:50:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.249 09:50:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:09.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:09.249 09:50:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.249 09:50:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:09.249 09:50:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.508 [2024-10-21 09:50:45.918610] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
00:06:09.508 [2024-10-21 09:50:45.918737] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58462 ] 00:06:09.508 [2024-10-21 09:50:46.079017] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.768 [2024-10-21 09:50:46.223923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.706 09:50:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:10.706 09:50:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:10.706 09:50:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:10.706 09:50:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.706 09:50:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.706 09:50:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.706 09:50:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:10.706 09:50:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:10.706 09:50:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:10.706 09:50:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:10.706 09:50:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:10.706 09:50:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.706 09:50:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.706 09:50:47 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.706 09:50:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58462 00:06:10.966 09:50:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:10.966 09:50:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58462 00:06:10.966 09:50:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58462 00:06:10.966 09:50:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 58462 ']' 00:06:10.966 09:50:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 58462 00:06:10.966 09:50:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:06:10.966 09:50:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:10.966 09:50:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58462 00:06:11.225 09:50:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:11.225 09:50:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:11.225 killing process with pid 58462 00:06:11.225 09:50:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58462' 00:06:11.225 09:50:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 58462 00:06:11.225 09:50:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 58462 00:06:13.765 00:06:13.765 real 0m4.312s 00:06:13.765 user 0m4.084s 00:06:13.765 sys 0m0.727s 00:06:13.765 09:50:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:13.765 09:50:50 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.765 ************************************ 00:06:13.765 END TEST default_locks_via_rpc 00:06:13.765 ************************************ 00:06:13.765 09:50:50 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:13.765 09:50:50 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:13.765 09:50:50 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:13.766 09:50:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:13.766 ************************************ 00:06:13.766 START TEST non_locking_app_on_locked_coremask 00:06:13.766 ************************************ 00:06:13.766 09:50:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:06:13.766 09:50:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58541 00:06:13.766 09:50:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58541 /var/tmp/spdk.sock 00:06:13.766 09:50:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:13.766 09:50:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 58541 ']' 00:06:13.766 09:50:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.766 09:50:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:13.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:13.766 09:50:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.766 09:50:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:13.766 09:50:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:13.766 [2024-10-21 09:50:50.287068] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:06:13.766 [2024-10-21 09:50:50.287192] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58541 ] 00:06:14.026 [2024-10-21 09:50:50.448291] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.026 [2024-10-21 09:50:50.595630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.405 09:50:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:15.405 09:50:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:15.405 09:50:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:15.405 09:50:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58563 00:06:15.405 09:50:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58563 /var/tmp/spdk2.sock 00:06:15.405 09:50:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 58563 ']' 00:06:15.405 09:50:51 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:15.405 09:50:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:15.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:15.405 09:50:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:15.405 09:50:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:15.405 09:50:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:15.405 [2024-10-21 09:50:51.741439] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:06:15.405 [2024-10-21 09:50:51.741573] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58563 ] 00:06:15.405 [2024-10-21 09:50:51.895174] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:15.405 [2024-10-21 09:50:51.895239] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.664 [2024-10-21 09:50:52.176051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.200 09:50:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:18.200 09:50:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:18.200 09:50:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58541 00:06:18.200 09:50:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58541 00:06:18.200 09:50:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:18.200 09:50:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58541 00:06:18.200 09:50:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 58541 ']' 00:06:18.200 09:50:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 58541 00:06:18.200 09:50:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:18.200 09:50:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:18.200 09:50:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58541 00:06:18.200 09:50:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:18.200 09:50:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:18.200 killing process with pid 58541 00:06:18.200 09:50:54 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 58541' 00:06:18.200 09:50:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 58541 00:06:18.200 09:50:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 58541 00:06:23.479 09:50:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58563 00:06:23.479 09:50:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 58563 ']' 00:06:23.479 09:50:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 58563 00:06:23.479 09:50:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:23.479 09:50:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:23.479 09:50:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58563 00:06:23.479 09:50:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:23.479 09:50:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:23.479 09:50:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58563' 00:06:23.479 killing process with pid 58563 00:06:23.479 09:50:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 58563 00:06:23.479 09:50:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 58563 00:06:26.019 00:06:26.019 real 0m12.402s 00:06:26.019 user 0m12.312s 00:06:26.019 sys 0m1.509s 00:06:26.019 09:51:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:06:26.019 09:51:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:26.019 ************************************ 00:06:26.019 END TEST non_locking_app_on_locked_coremask 00:06:26.019 ************************************ 00:06:26.278 09:51:02 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:26.278 09:51:02 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:26.278 09:51:02 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:26.278 09:51:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:26.278 ************************************ 00:06:26.278 START TEST locking_app_on_unlocked_coremask 00:06:26.278 ************************************ 00:06:26.278 09:51:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:06:26.278 09:51:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=58714 00:06:26.278 09:51:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:26.278 09:51:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 58714 /var/tmp/spdk.sock 00:06:26.278 09:51:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 58714 ']' 00:06:26.278 09:51:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.278 09:51:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:26.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:26.278 09:51:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.278 09:51:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:26.278 09:51:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:26.279 [2024-10-21 09:51:02.773131] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:06:26.279 [2024-10-21 09:51:02.773279] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58714 ] 00:06:26.538 [2024-10-21 09:51:02.939767] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:26.538 [2024-10-21 09:51:02.939846] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.538 [2024-10-21 09:51:03.087603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.924 09:51:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:27.924 09:51:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:27.924 09:51:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:27.924 09:51:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=58741 00:06:27.924 09:51:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 58741 /var/tmp/spdk2.sock 00:06:27.924 09:51:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 58741 ']' 
00:06:27.924 09:51:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:27.924 09:51:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:27.924 09:51:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:27.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:27.924 09:51:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:27.924 09:51:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:27.924 [2024-10-21 09:51:04.237610] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:06:27.924 [2024-10-21 09:51:04.237733] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58741 ] 00:06:27.924 [2024-10-21 09:51:04.394684] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.195 [2024-10-21 09:51:04.694627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.729 09:51:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:30.729 09:51:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:30.729 09:51:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 58741 00:06:30.729 09:51:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58741 00:06:30.729 09:51:06 
event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:30.729 09:51:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 58714 00:06:30.729 09:51:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 58714 ']' 00:06:30.729 09:51:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 58714 00:06:30.729 09:51:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:30.729 09:51:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:30.729 09:51:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58714 00:06:30.729 09:51:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:30.729 09:51:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:30.729 09:51:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58714' 00:06:30.729 killing process with pid 58714 00:06:30.729 09:51:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 58714 00:06:30.729 09:51:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 58714 00:06:36.004 09:51:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 58741 00:06:36.004 09:51:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 58741 ']' 00:06:36.004 09:51:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 58741 00:06:36.004 09:51:12 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@955 -- # uname 00:06:36.004 09:51:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:36.004 09:51:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58741 00:06:36.004 09:51:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:36.004 09:51:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:36.004 killing process with pid 58741 00:06:36.004 09:51:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58741' 00:06:36.004 09:51:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 58741 00:06:36.004 09:51:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 58741 00:06:38.544 00:06:38.544 real 0m12.183s 00:06:38.544 user 0m12.076s 00:06:38.544 sys 0m1.471s 00:06:38.544 09:51:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:38.544 09:51:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:38.544 ************************************ 00:06:38.544 END TEST locking_app_on_unlocked_coremask 00:06:38.544 ************************************ 00:06:38.544 09:51:14 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:38.544 09:51:14 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:38.544 09:51:14 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:38.544 09:51:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:38.544 ************************************ 00:06:38.544 START TEST 
locking_app_on_locked_coremask 00:06:38.544 ************************************ 00:06:38.544 09:51:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:06:38.544 09:51:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=58891 00:06:38.544 09:51:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 58891 /var/tmp/spdk.sock 00:06:38.544 09:51:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:38.544 09:51:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 58891 ']' 00:06:38.544 09:51:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.544 09:51:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:38.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.544 09:51:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.544 09:51:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:38.544 09:51:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:38.544 [2024-10-21 09:51:15.021577] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
00:06:38.544 [2024-10-21 09:51:15.021710] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58891 ] 00:06:38.804 [2024-10-21 09:51:15.187786] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.804 [2024-10-21 09:51:15.333062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.182 09:51:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:40.182 09:51:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:40.182 09:51:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=58913 00:06:40.182 09:51:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:40.182 09:51:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 58913 /var/tmp/spdk2.sock 00:06:40.182 09:51:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:40.182 09:51:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 58913 /var/tmp/spdk2.sock 00:06:40.182 09:51:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:40.182 09:51:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:40.182 09:51:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:40.182 09:51:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t 
"$arg")" in 00:06:40.182 09:51:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 58913 /var/tmp/spdk2.sock 00:06:40.182 09:51:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 58913 ']' 00:06:40.182 09:51:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:40.182 09:51:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:40.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:40.182 09:51:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:40.182 09:51:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:40.182 09:51:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:40.182 [2024-10-21 09:51:16.464087] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:06:40.182 [2024-10-21 09:51:16.464530] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58913 ] 00:06:40.182 [2024-10-21 09:51:16.613374] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 58891 has claimed it. 00:06:40.182 [2024-10-21 09:51:16.613444] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:06:40.749 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (58913) - No such process 00:06:40.749 ERROR: process (pid: 58913) is no longer running 00:06:40.749 09:51:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:40.749 09:51:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:40.749 09:51:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:40.749 09:51:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:40.749 09:51:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:40.749 09:51:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:40.749 09:51:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 58891 00:06:40.749 09:51:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58891 00:06:40.749 09:51:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:41.009 09:51:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 58891 00:06:41.009 09:51:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 58891 ']' 00:06:41.009 09:51:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 58891 00:06:41.009 09:51:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:41.009 09:51:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:41.009 09:51:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58891 00:06:41.009 
09:51:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:41.009 09:51:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:41.009 killing process with pid 58891 00:06:41.009 09:51:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58891' 00:06:41.009 09:51:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 58891 00:06:41.009 09:51:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 58891 00:06:43.543 00:06:43.543 real 0m5.052s 00:06:43.543 user 0m5.016s 00:06:43.543 sys 0m0.891s 00:06:43.543 09:51:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:43.543 09:51:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:43.543 ************************************ 00:06:43.543 END TEST locking_app_on_locked_coremask 00:06:43.543 ************************************ 00:06:43.543 09:51:20 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:43.543 09:51:20 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:43.543 09:51:20 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:43.543 09:51:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:43.543 ************************************ 00:06:43.543 START TEST locking_overlapped_coremask 00:06:43.543 ************************************ 00:06:43.543 09:51:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:06:43.543 09:51:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=58982 00:06:43.543 09:51:20 
event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:43.543 09:51:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 58982 /var/tmp/spdk.sock 00:06:43.543 09:51:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 58982 ']' 00:06:43.543 09:51:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.543 09:51:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:43.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:43.543 09:51:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:43.543 09:51:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:43.543 09:51:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:43.544 [2024-10-21 09:51:20.124091] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
00:06:43.544 [2024-10-21 09:51:20.124206] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58982 ] 00:06:43.802 [2024-10-21 09:51:20.286412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:44.061 [2024-10-21 09:51:20.441994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:44.061 [2024-10-21 09:51:20.442049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.061 [2024-10-21 09:51:20.442095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:44.998 09:51:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:44.998 09:51:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:44.998 09:51:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59006 00:06:44.998 09:51:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:44.998 09:51:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59006 /var/tmp/spdk2.sock 00:06:44.998 09:51:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:44.998 09:51:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59006 /var/tmp/spdk2.sock 00:06:44.998 09:51:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:44.998 09:51:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:44.998 09:51:21 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:44.998 09:51:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:44.998 09:51:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59006 /var/tmp/spdk2.sock 00:06:44.998 09:51:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 59006 ']' 00:06:44.998 09:51:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:44.998 09:51:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:44.998 09:51:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:44.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:44.999 09:51:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:44.999 09:51:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:44.999 [2024-10-21 09:51:21.587083] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:06:44.999 [2024-10-21 09:51:21.587276] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59006 ] 00:06:45.258 [2024-10-21 09:51:21.738802] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58982 has claimed it. 00:06:45.258 [2024-10-21 09:51:21.738872] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:06:45.825 ERROR: process (pid: 59006) is no longer running 00:06:45.825 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (59006) - No such process 00:06:45.825 09:51:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:45.825 09:51:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:45.825 09:51:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:45.825 09:51:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:45.825 09:51:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:45.825 09:51:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:45.825 09:51:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:45.825 09:51:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:45.825 09:51:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:45.825 09:51:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:45.825 09:51:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 58982 00:06:45.825 09:51:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 58982 ']' 00:06:45.825 09:51:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 58982 00:06:45.825 09:51:22 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:06:45.825 09:51:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:45.826 09:51:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58982 00:06:45.826 killing process with pid 58982 00:06:45.826 09:51:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:45.826 09:51:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:45.826 09:51:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58982' 00:06:45.826 09:51:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 58982 00:06:45.826 09:51:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 58982 00:06:48.360 00:06:48.360 real 0m4.802s 00:06:48.360 user 0m12.840s 00:06:48.360 sys 0m0.746s 00:06:48.360 ************************************ 00:06:48.360 END TEST locking_overlapped_coremask 00:06:48.360 ************************************ 00:06:48.360 09:51:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:48.360 09:51:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:48.360 09:51:24 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:48.360 09:51:24 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:48.360 09:51:24 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:48.360 09:51:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:48.360 ************************************ 00:06:48.360 START TEST 
locking_overlapped_coremask_via_rpc 00:06:48.360 ************************************ 00:06:48.360 09:51:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:06:48.360 09:51:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59070 00:06:48.360 09:51:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:48.360 09:51:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59070 /var/tmp/spdk.sock 00:06:48.360 09:51:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59070 ']' 00:06:48.360 09:51:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.360 09:51:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:48.360 09:51:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.360 09:51:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:48.360 09:51:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.619 [2024-10-21 09:51:24.992264] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
00:06:48.619 [2024-10-21 09:51:24.992473] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59070 ] 00:06:48.619 [2024-10-21 09:51:25.155230] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:48.619 [2024-10-21 09:51:25.155401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:48.878 [2024-10-21 09:51:25.305106] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:48.878 [2024-10-21 09:51:25.305256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.878 [2024-10-21 09:51:25.305288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:49.814 09:51:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:49.814 09:51:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:49.814 09:51:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59093 00:06:49.814 09:51:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:49.814 09:51:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59093 /var/tmp/spdk2.sock 00:06:49.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:49.814 09:51:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59093 ']' 00:06:49.814 09:51:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:49.814 09:51:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:49.814 09:51:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:49.814 09:51:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:49.814 09:51:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:50.073 [2024-10-21 09:51:26.451617] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:06:50.073 [2024-10-21 09:51:26.451789] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59093 ] 00:06:50.073 [2024-10-21 09:51:26.607162] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:50.073 [2024-10-21 09:51:26.607206] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:50.332 [2024-10-21 09:51:26.850304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:50.332 [2024-10-21 09:51:26.853754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:50.332 [2024-10-21 09:51:26.853791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:52.874 09:51:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:52.874 09:51:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:52.874 09:51:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:52.874 09:51:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.874 09:51:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:52.874 09:51:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.874 09:51:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:52.874 09:51:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:52.874 09:51:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:52.874 09:51:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:52.874 09:51:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:52.874 09:51:28 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:52.874 09:51:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:52.874 09:51:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:52.874 09:51:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.874 09:51:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:52.874 [2024-10-21 09:51:29.005770] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59070 has claimed it. 00:06:52.874 request: 00:06:52.874 { 00:06:52.874 "method": "framework_enable_cpumask_locks", 00:06:52.874 "req_id": 1 00:06:52.874 } 00:06:52.874 Got JSON-RPC error response 00:06:52.874 response: 00:06:52.874 { 00:06:52.874 "code": -32603, 00:06:52.874 "message": "Failed to claim CPU core: 2" 00:06:52.874 } 00:06:52.874 09:51:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:52.874 09:51:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:52.874 09:51:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:52.874 09:51:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:52.874 09:51:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:52.874 09:51:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59070 /var/tmp/spdk.sock 00:06:52.874 09:51:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # 
'[' -z 59070 ']' 00:06:52.874 09:51:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.874 09:51:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:52.874 09:51:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:52.874 09:51:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:52.874 09:51:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:52.874 09:51:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:52.874 09:51:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:52.874 09:51:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59093 /var/tmp/spdk2.sock 00:06:52.874 09:51:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59093 ']' 00:06:52.874 09:51:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:52.874 09:51:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:52.874 09:51:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:52.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:52.874 09:51:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:52.874 09:51:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:52.874 09:51:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:52.874 09:51:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:52.874 09:51:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:52.874 09:51:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:52.874 09:51:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:52.874 09:51:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:52.874 00:06:52.874 real 0m4.544s 00:06:52.874 user 0m1.250s 00:06:52.874 sys 0m0.190s 00:06:52.874 09:51:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:52.874 09:51:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:52.874 ************************************ 00:06:52.874 END TEST locking_overlapped_coremask_via_rpc 00:06:52.874 ************************************ 00:06:53.135 09:51:29 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:53.135 09:51:29 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59070 ]] 00:06:53.135 09:51:29 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 59070 00:06:53.135 09:51:29 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59070 ']' 00:06:53.135 09:51:29 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59070 00:06:53.135 09:51:29 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:53.135 09:51:29 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:53.135 09:51:29 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59070 00:06:53.135 09:51:29 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:53.135 09:51:29 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:53.135 09:51:29 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59070' 00:06:53.135 killing process with pid 59070 00:06:53.135 09:51:29 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 59070 00:06:53.135 09:51:29 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 59070 00:06:55.672 09:51:32 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59093 ]] 00:06:55.672 09:51:32 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59093 00:06:55.672 09:51:32 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59093 ']' 00:06:55.672 09:51:32 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59093 00:06:55.672 09:51:32 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:55.672 09:51:32 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:55.672 09:51:32 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59093 00:06:55.672 killing process with pid 59093 00:06:55.672 09:51:32 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:55.672 09:51:32 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:55.672 09:51:32 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing 
process with pid 59093' 00:06:55.672 09:51:32 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 59093 00:06:55.672 09:51:32 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 59093 00:06:58.210 09:51:34 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:58.210 09:51:34 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:58.210 09:51:34 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59070 ]] 00:06:58.210 09:51:34 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59070 00:06:58.210 09:51:34 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59070 ']' 00:06:58.210 09:51:34 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59070 00:06:58.210 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (59070) - No such process 00:06:58.210 Process with pid 59070 is not found 00:06:58.210 Process with pid 59093 is not found 00:06:58.210 09:51:34 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 59070 is not found' 00:06:58.210 09:51:34 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59093 ]] 00:06:58.210 09:51:34 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59093 00:06:58.210 09:51:34 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59093 ']' 00:06:58.210 09:51:34 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59093 00:06:58.210 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (59093) - No such process 00:06:58.210 09:51:34 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 59093 is not found' 00:06:58.210 09:51:34 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:58.210 00:06:58.210 real 0m53.580s 00:06:58.210 user 1m28.983s 00:06:58.210 sys 0m7.746s 00:06:58.210 09:51:34 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:58.210 09:51:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:58.210 
************************************ 00:06:58.210 END TEST cpu_locks 00:06:58.210 ************************************ 00:06:58.210 00:06:58.210 real 1m25.236s 00:06:58.210 user 2m31.349s 00:06:58.210 sys 0m11.738s 00:06:58.210 09:51:34 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:58.210 09:51:34 event -- common/autotest_common.sh@10 -- # set +x 00:06:58.210 ************************************ 00:06:58.210 END TEST event 00:06:58.210 ************************************ 00:06:58.210 09:51:34 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:58.210 09:51:34 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:58.210 09:51:34 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:58.210 09:51:34 -- common/autotest_common.sh@10 -- # set +x 00:06:58.210 ************************************ 00:06:58.210 START TEST thread 00:06:58.210 ************************************ 00:06:58.210 09:51:34 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:58.210 * Looking for test storage... 
00:06:58.470 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:58.470 09:51:34 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:58.470 09:51:34 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:06:58.470 09:51:34 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:58.470 09:51:34 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:58.470 09:51:34 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:58.470 09:51:34 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:58.470 09:51:34 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:58.470 09:51:34 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:58.470 09:51:34 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:58.470 09:51:34 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:58.470 09:51:34 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:58.470 09:51:34 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:58.470 09:51:34 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:58.470 09:51:34 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:58.470 09:51:34 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:58.470 09:51:34 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:58.470 09:51:34 thread -- scripts/common.sh@345 -- # : 1 00:06:58.470 09:51:34 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:58.470 09:51:34 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:58.470 09:51:34 thread -- scripts/common.sh@365 -- # decimal 1 00:06:58.470 09:51:34 thread -- scripts/common.sh@353 -- # local d=1 00:06:58.470 09:51:34 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:58.470 09:51:34 thread -- scripts/common.sh@355 -- # echo 1 00:06:58.470 09:51:34 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:58.470 09:51:34 thread -- scripts/common.sh@366 -- # decimal 2 00:06:58.470 09:51:34 thread -- scripts/common.sh@353 -- # local d=2 00:06:58.470 09:51:34 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:58.470 09:51:34 thread -- scripts/common.sh@355 -- # echo 2 00:06:58.470 09:51:34 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:58.470 09:51:34 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:58.470 09:51:34 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:58.470 09:51:34 thread -- scripts/common.sh@368 -- # return 0 00:06:58.470 09:51:34 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:58.470 09:51:34 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:58.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.470 --rc genhtml_branch_coverage=1 00:06:58.470 --rc genhtml_function_coverage=1 00:06:58.470 --rc genhtml_legend=1 00:06:58.470 --rc geninfo_all_blocks=1 00:06:58.470 --rc geninfo_unexecuted_blocks=1 00:06:58.470 00:06:58.470 ' 00:06:58.470 09:51:34 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:58.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.470 --rc genhtml_branch_coverage=1 00:06:58.470 --rc genhtml_function_coverage=1 00:06:58.470 --rc genhtml_legend=1 00:06:58.470 --rc geninfo_all_blocks=1 00:06:58.470 --rc geninfo_unexecuted_blocks=1 00:06:58.470 00:06:58.470 ' 00:06:58.470 09:51:34 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:58.470 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.470 --rc genhtml_branch_coverage=1 00:06:58.470 --rc genhtml_function_coverage=1 00:06:58.470 --rc genhtml_legend=1 00:06:58.470 --rc geninfo_all_blocks=1 00:06:58.470 --rc geninfo_unexecuted_blocks=1 00:06:58.470 00:06:58.470 ' 00:06:58.470 09:51:34 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:58.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.470 --rc genhtml_branch_coverage=1 00:06:58.470 --rc genhtml_function_coverage=1 00:06:58.470 --rc genhtml_legend=1 00:06:58.470 --rc geninfo_all_blocks=1 00:06:58.470 --rc geninfo_unexecuted_blocks=1 00:06:58.470 00:06:58.470 ' 00:06:58.470 09:51:34 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:58.470 09:51:34 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:58.470 09:51:34 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:58.470 09:51:34 thread -- common/autotest_common.sh@10 -- # set +x 00:06:58.470 ************************************ 00:06:58.470 START TEST thread_poller_perf 00:06:58.470 ************************************ 00:06:58.470 09:51:34 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:58.470 [2024-10-21 09:51:34.968691] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
00:06:58.470 [2024-10-21 09:51:34.968833] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59294 ] 00:06:58.729 [2024-10-21 09:51:35.125387] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.729 [2024-10-21 09:51:35.265468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.729 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:00.109 [2024-10-21T09:51:36.704Z] ====================================== 00:07:00.109 [2024-10-21T09:51:36.704Z] busy:2299796118 (cyc) 00:07:00.109 [2024-10-21T09:51:36.704Z] total_run_count: 417000 00:07:00.109 [2024-10-21T09:51:36.704Z] tsc_hz: 2290000000 (cyc) 00:07:00.109 [2024-10-21T09:51:36.704Z] ====================================== 00:07:00.109 [2024-10-21T09:51:36.704Z] poller_cost: 5515 (cyc), 2408 (nsec) 00:07:00.109 00:07:00.109 real 0m1.619s 00:07:00.109 user 0m1.405s 00:07:00.109 sys 0m0.107s 00:07:00.109 09:51:36 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:00.109 09:51:36 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:00.109 ************************************ 00:07:00.109 END TEST thread_poller_perf 00:07:00.109 ************************************ 00:07:00.109 09:51:36 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:00.109 09:51:36 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:00.109 09:51:36 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:00.109 09:51:36 thread -- common/autotest_common.sh@10 -- # set +x 00:07:00.109 ************************************ 00:07:00.109 START TEST thread_poller_perf 00:07:00.109 
************************************ 00:07:00.109 09:51:36 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:00.109 [2024-10-21 09:51:36.655025] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:07:00.109 [2024-10-21 09:51:36.655191] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59325 ] 00:07:00.368 [2024-10-21 09:51:36.817417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.627 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:00.627 [2024-10-21 09:51:36.965660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.012 [2024-10-21T09:51:38.607Z] ====================================== 00:07:02.012 [2024-10-21T09:51:38.607Z] busy:2293440358 (cyc) 00:07:02.012 [2024-10-21T09:51:38.607Z] total_run_count: 5427000 00:07:02.012 [2024-10-21T09:51:38.607Z] tsc_hz: 2290000000 (cyc) 00:07:02.012 [2024-10-21T09:51:38.607Z] ====================================== 00:07:02.012 [2024-10-21T09:51:38.607Z] poller_cost: 422 (cyc), 184 (nsec) 00:07:02.012 00:07:02.012 real 0m1.615s 00:07:02.012 user 0m1.402s 00:07:02.012 sys 0m0.106s 00:07:02.012 09:51:38 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:02.012 ************************************ 00:07:02.012 END TEST thread_poller_perf 00:07:02.012 ************************************ 00:07:02.012 09:51:38 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:02.012 09:51:38 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:02.012 00:07:02.012 real 0m3.594s 00:07:02.012 user 0m2.958s 00:07:02.012 sys 0m0.434s 00:07:02.012 ************************************ 
00:07:02.012 END TEST thread 00:07:02.013 ************************************ 00:07:02.013 09:51:38 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:02.013 09:51:38 thread -- common/autotest_common.sh@10 -- # set +x 00:07:02.013 09:51:38 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:02.013 09:51:38 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:02.013 09:51:38 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:02.013 09:51:38 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:02.013 09:51:38 -- common/autotest_common.sh@10 -- # set +x 00:07:02.013 ************************************ 00:07:02.013 START TEST app_cmdline 00:07:02.013 ************************************ 00:07:02.013 09:51:38 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:02.013 * Looking for test storage... 00:07:02.013 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:02.013 09:51:38 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:02.013 09:51:38 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:07:02.013 09:51:38 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:02.013 09:51:38 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:02.013 09:51:38 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:02.013 09:51:38 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:02.013 09:51:38 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:02.013 09:51:38 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:02.013 09:51:38 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:02.013 09:51:38 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:02.013 09:51:38 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:02.013 09:51:38 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 
00:07:02.013 09:51:38 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:02.013 09:51:38 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:02.013 09:51:38 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:02.013 09:51:38 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:02.013 09:51:38 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:02.013 09:51:38 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:02.013 09:51:38 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:02.013 09:51:38 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:02.013 09:51:38 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:02.013 09:51:38 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:02.013 09:51:38 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:02.013 09:51:38 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:02.013 09:51:38 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:02.013 09:51:38 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:02.013 09:51:38 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:02.013 09:51:38 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:02.013 09:51:38 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:02.013 09:51:38 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:02.013 09:51:38 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:02.013 09:51:38 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:02.013 09:51:38 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:02.013 09:51:38 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:02.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.013 --rc genhtml_branch_coverage=1 00:07:02.013 --rc genhtml_function_coverage=1 00:07:02.013 --rc 
genhtml_legend=1 00:07:02.013 --rc geninfo_all_blocks=1 00:07:02.013 --rc geninfo_unexecuted_blocks=1 00:07:02.013 00:07:02.013 ' 00:07:02.013 09:51:38 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:02.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.013 --rc genhtml_branch_coverage=1 00:07:02.013 --rc genhtml_function_coverage=1 00:07:02.013 --rc genhtml_legend=1 00:07:02.013 --rc geninfo_all_blocks=1 00:07:02.013 --rc geninfo_unexecuted_blocks=1 00:07:02.013 00:07:02.013 ' 00:07:02.013 09:51:38 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:02.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.013 --rc genhtml_branch_coverage=1 00:07:02.013 --rc genhtml_function_coverage=1 00:07:02.013 --rc genhtml_legend=1 00:07:02.013 --rc geninfo_all_blocks=1 00:07:02.013 --rc geninfo_unexecuted_blocks=1 00:07:02.013 00:07:02.013 ' 00:07:02.013 09:51:38 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:02.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.013 --rc genhtml_branch_coverage=1 00:07:02.013 --rc genhtml_function_coverage=1 00:07:02.013 --rc genhtml_legend=1 00:07:02.013 --rc geninfo_all_blocks=1 00:07:02.013 --rc geninfo_unexecuted_blocks=1 00:07:02.013 00:07:02.013 ' 00:07:02.013 09:51:38 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:02.013 09:51:38 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59414 00:07:02.013 09:51:38 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:02.013 09:51:38 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59414 00:07:02.013 09:51:38 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 59414 ']' 00:07:02.013 09:51:38 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.013 09:51:38 app_cmdline -- common/autotest_common.sh@836 -- # 
local max_retries=100 00:07:02.013 09:51:38 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:02.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:02.013 09:51:38 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:02.013 09:51:38 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:02.273 [2024-10-21 09:51:38.656057] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:07:02.273 [2024-10-21 09:51:38.656671] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59414 ] 00:07:02.273 [2024-10-21 09:51:38.817311] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.532 [2024-10-21 09:51:38.961099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.468 09:51:40 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:03.468 09:51:40 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:07:03.468 09:51:40 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:03.726 { 00:07:03.726 "version": "SPDK v25.01-pre git sha1 1042d663d", 00:07:03.726 "fields": { 00:07:03.726 "major": 25, 00:07:03.726 "minor": 1, 00:07:03.726 "patch": 0, 00:07:03.726 "suffix": "-pre", 00:07:03.726 "commit": "1042d663d" 00:07:03.726 } 00:07:03.726 } 00:07:03.726 09:51:40 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:03.726 09:51:40 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:03.726 09:51:40 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:03.726 09:51:40 app_cmdline -- app/cmdline.sh@26 -- # 
methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:03.726 09:51:40 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:03.726 09:51:40 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:03.726 09:51:40 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:03.726 09:51:40 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.726 09:51:40 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:03.726 09:51:40 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.726 09:51:40 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:03.726 09:51:40 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:03.726 09:51:40 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:03.726 09:51:40 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:07:03.726 09:51:40 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:03.726 09:51:40 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:03.727 09:51:40 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:03.727 09:51:40 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:03.727 09:51:40 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:03.727 09:51:40 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:03.727 09:51:40 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:03.727 09:51:40 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:03.727 09:51:40 app_cmdline -- common/autotest_common.sh@644 
-- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:03.727 09:51:40 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:03.986 request: 00:07:03.986 { 00:07:03.986 "method": "env_dpdk_get_mem_stats", 00:07:03.986 "req_id": 1 00:07:03.986 } 00:07:03.986 Got JSON-RPC error response 00:07:03.986 response: 00:07:03.986 { 00:07:03.986 "code": -32601, 00:07:03.986 "message": "Method not found" 00:07:03.986 } 00:07:03.986 09:51:40 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:07:03.986 09:51:40 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:03.986 09:51:40 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:03.986 09:51:40 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:03.986 09:51:40 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59414 00:07:03.986 09:51:40 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 59414 ']' 00:07:03.986 09:51:40 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 59414 00:07:03.986 09:51:40 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:07:03.986 09:51:40 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:03.986 09:51:40 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59414 00:07:03.986 killing process with pid 59414 00:07:03.986 09:51:40 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:03.986 09:51:40 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:03.986 09:51:40 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59414' 00:07:03.986 09:51:40 app_cmdline -- common/autotest_common.sh@969 -- # kill 59414 00:07:03.986 09:51:40 app_cmdline -- common/autotest_common.sh@974 -- # wait 59414 00:07:06.524 00:07:06.524 real 0m4.713s 00:07:06.524 user 0m4.719s 00:07:06.524 sys 0m0.749s 00:07:06.524 09:51:43 app_cmdline -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:07:06.524 ************************************ 00:07:06.524 END TEST app_cmdline 00:07:06.524 ************************************ 00:07:06.524 09:51:43 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:06.524 09:51:43 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:06.524 09:51:43 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:06.524 09:51:43 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:06.524 09:51:43 -- common/autotest_common.sh@10 -- # set +x 00:07:06.524 ************************************ 00:07:06.524 START TEST version 00:07:06.524 ************************************ 00:07:06.524 09:51:43 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:06.783 * Looking for test storage... 00:07:06.783 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:06.783 09:51:43 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:06.783 09:51:43 version -- common/autotest_common.sh@1691 -- # lcov --version 00:07:06.783 09:51:43 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:06.783 09:51:43 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:06.783 09:51:43 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:06.783 09:51:43 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:06.783 09:51:43 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:06.783 09:51:43 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:06.783 09:51:43 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:06.783 09:51:43 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:06.783 09:51:43 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:06.783 09:51:43 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:06.783 09:51:43 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:06.783 09:51:43 version -- 
scripts/common.sh@341 -- # ver2_l=1 00:07:06.783 09:51:43 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:06.783 09:51:43 version -- scripts/common.sh@344 -- # case "$op" in 00:07:06.783 09:51:43 version -- scripts/common.sh@345 -- # : 1 00:07:06.783 09:51:43 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:06.783 09:51:43 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:06.783 09:51:43 version -- scripts/common.sh@365 -- # decimal 1 00:07:06.783 09:51:43 version -- scripts/common.sh@353 -- # local d=1 00:07:06.783 09:51:43 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:06.783 09:51:43 version -- scripts/common.sh@355 -- # echo 1 00:07:06.783 09:51:43 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:06.783 09:51:43 version -- scripts/common.sh@366 -- # decimal 2 00:07:06.783 09:51:43 version -- scripts/common.sh@353 -- # local d=2 00:07:06.783 09:51:43 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:06.783 09:51:43 version -- scripts/common.sh@355 -- # echo 2 00:07:06.784 09:51:43 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:06.784 09:51:43 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:06.784 09:51:43 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:06.784 09:51:43 version -- scripts/common.sh@368 -- # return 0 00:07:06.784 09:51:43 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:06.784 09:51:43 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:06.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.784 --rc genhtml_branch_coverage=1 00:07:06.784 --rc genhtml_function_coverage=1 00:07:06.784 --rc genhtml_legend=1 00:07:06.784 --rc geninfo_all_blocks=1 00:07:06.784 --rc geninfo_unexecuted_blocks=1 00:07:06.784 00:07:06.784 ' 00:07:06.784 09:51:43 version -- common/autotest_common.sh@1704 -- # 
LCOV_OPTS=' 00:07:06.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.784 --rc genhtml_branch_coverage=1 00:07:06.784 --rc genhtml_function_coverage=1 00:07:06.784 --rc genhtml_legend=1 00:07:06.784 --rc geninfo_all_blocks=1 00:07:06.784 --rc geninfo_unexecuted_blocks=1 00:07:06.784 00:07:06.784 ' 00:07:06.784 09:51:43 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:06.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.784 --rc genhtml_branch_coverage=1 00:07:06.784 --rc genhtml_function_coverage=1 00:07:06.784 --rc genhtml_legend=1 00:07:06.784 --rc geninfo_all_blocks=1 00:07:06.784 --rc geninfo_unexecuted_blocks=1 00:07:06.784 00:07:06.784 ' 00:07:06.784 09:51:43 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:06.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.784 --rc genhtml_branch_coverage=1 00:07:06.784 --rc genhtml_function_coverage=1 00:07:06.784 --rc genhtml_legend=1 00:07:06.784 --rc geninfo_all_blocks=1 00:07:06.784 --rc geninfo_unexecuted_blocks=1 00:07:06.784 00:07:06.784 ' 00:07:06.784 09:51:43 version -- app/version.sh@17 -- # get_header_version major 00:07:06.784 09:51:43 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:06.784 09:51:43 version -- app/version.sh@14 -- # cut -f2 00:07:06.784 09:51:43 version -- app/version.sh@14 -- # tr -d '"' 00:07:06.784 09:51:43 version -- app/version.sh@17 -- # major=25 00:07:06.784 09:51:43 version -- app/version.sh@18 -- # get_header_version minor 00:07:06.784 09:51:43 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:06.784 09:51:43 version -- app/version.sh@14 -- # cut -f2 00:07:06.784 09:51:43 version -- app/version.sh@14 -- # tr -d '"' 00:07:06.784 09:51:43 version -- app/version.sh@18 -- # minor=1 00:07:06.784 09:51:43 
version -- app/version.sh@19 -- # get_header_version patch 00:07:06.784 09:51:43 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:06.784 09:51:43 version -- app/version.sh@14 -- # cut -f2 00:07:06.784 09:51:43 version -- app/version.sh@14 -- # tr -d '"' 00:07:06.784 09:51:43 version -- app/version.sh@19 -- # patch=0 00:07:06.784 09:51:43 version -- app/version.sh@20 -- # get_header_version suffix 00:07:06.784 09:51:43 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:06.784 09:51:43 version -- app/version.sh@14 -- # cut -f2 00:07:06.784 09:51:43 version -- app/version.sh@14 -- # tr -d '"' 00:07:07.044 09:51:43 version -- app/version.sh@20 -- # suffix=-pre 00:07:07.044 09:51:43 version -- app/version.sh@22 -- # version=25.1 00:07:07.044 09:51:43 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:07.044 09:51:43 version -- app/version.sh@28 -- # version=25.1rc0 00:07:07.044 09:51:43 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:07.044 09:51:43 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:07.044 09:51:43 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:07.044 09:51:43 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:07.044 ************************************ 00:07:07.044 END TEST version 00:07:07.044 ************************************ 00:07:07.044 00:07:07.044 real 0m0.314s 00:07:07.044 user 0m0.175s 00:07:07.044 sys 0m0.195s 00:07:07.044 09:51:43 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:07.044 09:51:43 version -- common/autotest_common.sh@10 -- # set +x 00:07:07.044 
09:51:43 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:07.044 09:51:43 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:07:07.044 09:51:43 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:07:07.044 09:51:43 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:07.044 09:51:43 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:07.044 09:51:43 -- common/autotest_common.sh@10 -- # set +x 00:07:07.044 ************************************ 00:07:07.044 START TEST bdev_raid 00:07:07.044 ************************************ 00:07:07.044 09:51:43 bdev_raid -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:07:07.044 * Looking for test storage... 00:07:07.044 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:07.044 09:51:43 bdev_raid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:07.044 09:51:43 bdev_raid -- common/autotest_common.sh@1691 -- # lcov --version 00:07:07.044 09:51:43 bdev_raid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:07.305 09:51:43 bdev_raid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:07.305 09:51:43 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:07.305 09:51:43 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:07.305 09:51:43 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:07.305 09:51:43 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:07:07.305 09:51:43 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:07:07.305 09:51:43 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:07:07.305 09:51:43 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:07:07.305 09:51:43 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:07:07.305 09:51:43 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:07:07.305 09:51:43 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:07:07.305 09:51:43 bdev_raid -- scripts/common.sh@343 -- # local 
lt=0 gt=0 eq=0 v 00:07:07.305 09:51:43 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:07:07.305 09:51:43 bdev_raid -- scripts/common.sh@345 -- # : 1 00:07:07.305 09:51:43 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:07.305 09:51:43 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:07.305 09:51:43 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:07:07.305 09:51:43 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:07:07.305 09:51:43 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:07.305 09:51:43 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:07:07.305 09:51:43 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:07:07.305 09:51:43 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:07:07.305 09:51:43 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:07:07.305 09:51:43 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:07.305 09:51:43 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:07:07.305 09:51:43 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:07:07.305 09:51:43 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:07.305 09:51:43 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:07.305 09:51:43 bdev_raid -- scripts/common.sh@368 -- # return 0 00:07:07.305 09:51:43 bdev_raid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:07.305 09:51:43 bdev_raid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:07.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.305 --rc genhtml_branch_coverage=1 00:07:07.305 --rc genhtml_function_coverage=1 00:07:07.305 --rc genhtml_legend=1 00:07:07.305 --rc geninfo_all_blocks=1 00:07:07.305 --rc geninfo_unexecuted_blocks=1 00:07:07.305 00:07:07.305 ' 00:07:07.305 09:51:43 bdev_raid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:07.305 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:07.305 --rc genhtml_branch_coverage=1 00:07:07.305 --rc genhtml_function_coverage=1 00:07:07.305 --rc genhtml_legend=1 00:07:07.305 --rc geninfo_all_blocks=1 00:07:07.305 --rc geninfo_unexecuted_blocks=1 00:07:07.305 00:07:07.305 ' 00:07:07.305 09:51:43 bdev_raid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:07.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.305 --rc genhtml_branch_coverage=1 00:07:07.305 --rc genhtml_function_coverage=1 00:07:07.305 --rc genhtml_legend=1 00:07:07.305 --rc geninfo_all_blocks=1 00:07:07.305 --rc geninfo_unexecuted_blocks=1 00:07:07.305 00:07:07.305 ' 00:07:07.305 09:51:43 bdev_raid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:07.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.305 --rc genhtml_branch_coverage=1 00:07:07.305 --rc genhtml_function_coverage=1 00:07:07.305 --rc genhtml_legend=1 00:07:07.305 --rc geninfo_all_blocks=1 00:07:07.305 --rc geninfo_unexecuted_blocks=1 00:07:07.305 00:07:07.305 ' 00:07:07.305 09:51:43 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:07.305 09:51:43 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:07:07.305 09:51:43 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:07:07.305 09:51:43 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:07:07.305 09:51:43 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:07:07.305 09:51:43 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:07:07.305 09:51:43 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:07:07.305 09:51:43 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:07.305 09:51:43 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:07.305 09:51:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:07.305 ************************************ 
00:07:07.305 START TEST raid1_resize_data_offset_test 00:07:07.305 ************************************ 00:07:07.305 09:51:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1125 -- # raid_resize_data_offset_test 00:07:07.305 Process raid pid: 59607 00:07:07.305 09:51:43 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=59607 00:07:07.305 09:51:43 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 59607' 00:07:07.305 09:51:43 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:07.305 09:51:43 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 59607 00:07:07.305 09:51:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@831 -- # '[' -z 59607 ']' 00:07:07.305 09:51:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:07.305 09:51:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:07.305 09:51:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:07.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:07.305 09:51:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:07.305 09:51:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.305 [2024-10-21 09:51:43.832528] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
00:07:07.306 [2024-10-21 09:51:43.832754] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:07.565 [2024-10-21 09:51:43.996397] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.565 [2024-10-21 09:51:44.141970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.825 [2024-10-21 09:51:44.396627] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:07.825 [2024-10-21 09:51:44.396755] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:08.085 09:51:44 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:08.085 09:51:44 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # return 0 00:07:08.085 09:51:44 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:07:08.085 09:51:44 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.085 09:51:44 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.344 malloc0 00:07:08.344 09:51:44 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.344 09:51:44 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:07:08.344 09:51:44 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.344 09:51:44 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.344 malloc1 00:07:08.344 09:51:44 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.344 09:51:44 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:07:08.344 09:51:44 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.344 09:51:44 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.344 null0 00:07:08.344 09:51:44 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.344 09:51:44 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:07:08.344 09:51:44 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.344 09:51:44 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.344 [2024-10-21 09:51:44.870195] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:07:08.344 [2024-10-21 09:51:44.872311] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:08.344 [2024-10-21 09:51:44.872355] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:07:08.344 [2024-10-21 09:51:44.872499] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000005b80 00:07:08.344 [2024-10-21 09:51:44.872510] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:07:08.344 [2024-10-21 09:51:44.872764] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:07:08.344 [2024-10-21 09:51:44.872932] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000005b80 00:07:08.344 [2024-10-21 09:51:44.872946] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000005b80 00:07:08.344 [2024-10-21 09:51:44.873103] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:07:08.345 09:51:44 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.345 09:51:44 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:08.345 09:51:44 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.345 09:51:44 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:07:08.345 09:51:44 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.345 09:51:44 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.345 09:51:44 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:07:08.345 09:51:44 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:07:08.345 09:51:44 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.345 09:51:44 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.345 [2024-10-21 09:51:44.930178] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:07:08.345 09:51:44 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.345 09:51:44 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:07:08.345 09:51:44 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.345 09:51:44 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.285 malloc2 00:07:09.285 09:51:45 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.285 09:51:45 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:07:09.285 09:51:45 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.285 09:51:45 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.285 [2024-10-21 09:51:45.594104] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:09.285 [2024-10-21 09:51:45.613793] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:07:09.285 09:51:45 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.285 [2024-10-21 09:51:45.615849] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:07:09.285 09:51:45 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:09.285 09:51:45 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:07:09.285 09:51:45 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.285 09:51:45 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.285 09:51:45 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.285 09:51:45 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:07:09.285 09:51:45 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 59607 00:07:09.285 09:51:45 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@950 -- # '[' -z 59607 ']' 00:07:09.285 09:51:45 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # kill -0 59607 00:07:09.285 09:51:45 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@955 -- # uname 00:07:09.285 09:51:45 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux 
']' 00:07:09.285 09:51:45 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59607 00:07:09.285 killing process with pid 59607 00:07:09.285 09:51:45 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:09.285 09:51:45 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:09.285 09:51:45 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59607' 00:07:09.285 09:51:45 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@969 -- # kill 59607 00:07:09.285 09:51:45 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@974 -- # wait 59607 00:07:09.285 [2024-10-21 09:51:45.709409] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:09.285 [2024-10-21 09:51:45.709739] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:07:09.285 [2024-10-21 09:51:45.709808] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:09.285 [2024-10-21 09:51:45.709827] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:07:09.285 [2024-10-21 09:51:45.745680] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:09.285 [2024-10-21 09:51:45.746026] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:09.285 [2024-10-21 09:51:45.746054] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000005b80 name Raid, state offline 00:07:11.190 [2024-10-21 09:51:47.676002] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:12.565 09:51:48 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0 00:07:12.565 00:07:12.565 real 0m5.131s 00:07:12.565 user 0m4.843s 00:07:12.565 sys 0m0.721s 00:07:12.565 
************************************ 00:07:12.565 END TEST raid1_resize_data_offset_test 00:07:12.565 ************************************ 00:07:12.565 09:51:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:12.566 09:51:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.566 09:51:48 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:07:12.566 09:51:48 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:12.566 09:51:48 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:12.566 09:51:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:12.566 ************************************ 00:07:12.566 START TEST raid0_resize_superblock_test 00:07:12.566 ************************************ 00:07:12.566 09:51:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1125 -- # raid_resize_superblock_test 0 00:07:12.566 09:51:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:07:12.566 09:51:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=59696 00:07:12.566 Process raid pid: 59696 00:07:12.566 09:51:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:12.566 09:51:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 59696' 00:07:12.566 09:51:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 59696 00:07:12.566 09:51:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 59696 ']' 00:07:12.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:12.566 09:51:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:12.566 09:51:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:12.566 09:51:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:12.566 09:51:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:12.566 09:51:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.566 [2024-10-21 09:51:49.043906] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:07:12.566 [2024-10-21 09:51:49.044140] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:12.824 [2024-10-21 09:51:49.213412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.824 [2024-10-21 09:51:49.361118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.084 [2024-10-21 09:51:49.607000] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:13.084 [2024-10-21 09:51:49.607147] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:13.344 09:51:49 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:13.344 09:51:49 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:07:13.344 09:51:49 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:07:13.344 09:51:49 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:07:13.344 09:51:49 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.280 malloc0 00:07:14.280 09:51:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.280 09:51:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:14.280 09:51:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.280 09:51:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.280 [2024-10-21 09:51:50.531689] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:14.280 [2024-10-21 09:51:50.531852] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:14.280 [2024-10-21 09:51:50.531900] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:07:14.280 [2024-10-21 09:51:50.531935] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:14.280 [2024-10-21 09:51:50.534357] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:14.280 [2024-10-21 09:51:50.534431] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:14.280 pt0 00:07:14.280 09:51:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.280 09:51:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:07:14.280 09:51:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.280 09:51:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.280 54f46ee0-e189-4a1b-815d-738ece032116 00:07:14.280 09:51:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.280 09:51:50 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:07:14.280 09:51:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.280 09:51:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.280 710cacc7-8bc5-4705-8561-59e0ca98d4d6 00:07:14.280 09:51:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.280 09:51:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:07:14.280 09:51:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.280 09:51:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.280 cfdf406d-ee0b-4a7d-a740-33f70fd19316 00:07:14.280 09:51:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.280 09:51:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:07:14.280 09:51:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:07:14.280 09:51:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.280 09:51:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.280 [2024-10-21 09:51:50.738485] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 710cacc7-8bc5-4705-8561-59e0ca98d4d6 is claimed 00:07:14.280 [2024-10-21 09:51:50.738684] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev cfdf406d-ee0b-4a7d-a740-33f70fd19316 is claimed 00:07:14.280 [2024-10-21 09:51:50.738818] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000005b80 00:07:14.280 [2024-10-21 09:51:50.738836] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:07:14.280 [2024-10-21 09:51:50.739107] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:07:14.280 [2024-10-21 09:51:50.739315] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000005b80 00:07:14.280 [2024-10-21 09:51:50.739326] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000005b80 00:07:14.280 [2024-10-21 09:51:50.739493] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:14.280 09:51:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.280 09:51:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:14.280 09:51:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.280 09:51:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.280 09:51:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:07:14.280 09:51:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.281 09:51:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:07:14.281 09:51:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:07:14.281 09:51:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:14.281 09:51:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.281 09:51:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.281 09:51:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.281 09:51:50 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:07:14.281 09:51:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:14.281 09:51:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:14.281 09:51:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:14.281 09:51:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:07:14.281 09:51:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.281 09:51:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.281 [2024-10-21 09:51:50.850465] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:14.281 09:51:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.540 09:51:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:14.540 09:51:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:14.540 09:51:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:07:14.540 09:51:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:07:14.540 09:51:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.540 09:51:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.540 [2024-10-21 09:51:50.894531] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:14.540 [2024-10-21 09:51:50.894636] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '710cacc7-8bc5-4705-8561-59e0ca98d4d6' was resized: old size 131072, new size 204800 
00:07:14.540 09:51:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.540 09:51:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:07:14.540 09:51:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.540 09:51:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.540 [2024-10-21 09:51:50.906361] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:14.540 [2024-10-21 09:51:50.906428] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'cfdf406d-ee0b-4a7d-a740-33f70fd19316' was resized: old size 131072, new size 204800 00:07:14.540 [2024-10-21 09:51:50.906485] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:07:14.540 09:51:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.540 09:51:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:14.540 09:51:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.540 09:51:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.540 09:51:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:07:14.540 09:51:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.540 09:51:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:07:14.540 09:51:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:14.540 09:51:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.540 09:51:50 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.540 09:51:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:07:14.540 09:51:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.540 09:51:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:07:14.540 09:51:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:14.540 09:51:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:07:14.540 09:51:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:14.540 09:51:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:14.540 09:51:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.540 09:51:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.540 [2024-10-21 09:51:51.022270] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:14.540 09:51:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.540 09:51:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:14.540 09:51:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:14.540 09:51:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:07:14.540 09:51:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:07:14.540 09:51:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.540 09:51:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:07:14.540 [2024-10-21 09:51:51.070009] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:07:14.540 [2024-10-21 09:51:51.070130] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:07:14.540 [2024-10-21 09:51:51.070145] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:14.540 [2024-10-21 09:51:51.070161] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:07:14.540 [2024-10-21 09:51:51.070301] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:14.540 [2024-10-21 09:51:51.070338] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:14.540 [2024-10-21 09:51:51.070351] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000005b80 name Raid, state offline 00:07:14.540 09:51:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.540 09:51:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:14.540 09:51:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.540 09:51:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.540 [2024-10-21 09:51:51.081836] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:14.540 [2024-10-21 09:51:51.081901] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:14.540 [2024-10-21 09:51:51.081926] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:14.540 [2024-10-21 09:51:51.081940] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:14.540 [2024-10-21 09:51:51.084449] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:14.540 
[2024-10-21 09:51:51.084486] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:14.540 [2024-10-21 09:51:51.086205] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 710cacc7-8bc5-4705-8561-59e0ca98d4d6 00:07:14.540 [2024-10-21 09:51:51.086277] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 710cacc7-8bc5-4705-8561-59e0ca98d4d6 is claimed 00:07:14.540 [2024-10-21 09:51:51.086387] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev cfdf406d-ee0b-4a7d-a740-33f70fd19316 00:07:14.540 [2024-10-21 09:51:51.086414] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev cfdf406d-ee0b-4a7d-a740-33f70fd19316 is claimed 00:07:14.540 [2024-10-21 09:51:51.086582] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev cfdf406d-ee0b-4a7d-a740-33f70fd19316 (2) smaller than existing raid bdev Raid (3) 00:07:14.540 [2024-10-21 09:51:51.086608] bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 710cacc7-8bc5-4705-8561-59e0ca98d4d6: File exists 00:07:14.540 [2024-10-21 09:51:51.086647] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000005f00 00:07:14.540 [2024-10-21 09:51:51.086660] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:07:14.540 pt0 00:07:14.540 [2024-10-21 09:51:51.086905] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:14.540 [2024-10-21 09:51:51.087062] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000005f00 00:07:14.540 [2024-10-21 09:51:51.087071] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000005f00 00:07:14.540 [2024-10-21 09:51:51.087224] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:14.540 09:51:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:07:14.540 09:51:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:07:14.540 09:51:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.540 09:51:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.540 09:51:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.540 09:51:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:14.540 09:51:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:14.540 09:51:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:14.540 09:51:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:07:14.541 09:51:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.541 09:51:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.541 [2024-10-21 09:51:51.110486] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:14.541 09:51:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.800 09:51:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:14.800 09:51:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:14.800 09:51:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:07:14.800 09:51:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 59696 00:07:14.800 09:51:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 59696 ']' 00:07:14.800 09:51:51 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@954 -- # kill -0 59696 00:07:14.800 09:51:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@955 -- # uname 00:07:14.800 09:51:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:14.800 09:51:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59696 00:07:14.800 killing process with pid 59696 00:07:14.800 09:51:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:14.800 09:51:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:14.800 09:51:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59696' 00:07:14.800 09:51:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@969 -- # kill 59696 00:07:14.800 [2024-10-21 09:51:51.191013] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:14.800 [2024-10-21 09:51:51.191117] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:14.800 09:51:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@974 -- # wait 59696 00:07:14.800 [2024-10-21 09:51:51.191176] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:14.800 [2024-10-21 09:51:51.191188] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000005f00 name Raid, state offline 00:07:16.178 [2024-10-21 09:51:52.715278] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:17.557 09:51:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:07:17.557 00:07:17.557 real 0m5.008s 00:07:17.557 user 0m5.064s 00:07:17.557 sys 0m0.704s 00:07:17.557 09:51:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:17.557 
09:51:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.557 ************************************ 00:07:17.557 END TEST raid0_resize_superblock_test 00:07:17.557 ************************************ 00:07:17.557 09:51:54 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:07:17.557 09:51:54 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:17.557 09:51:54 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:17.557 09:51:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:17.557 ************************************ 00:07:17.557 START TEST raid1_resize_superblock_test 00:07:17.557 ************************************ 00:07:17.557 09:51:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1125 -- # raid_resize_superblock_test 1 00:07:17.557 09:51:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:07:17.557 09:51:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=59799 00:07:17.557 Process raid pid: 59799 00:07:17.557 09:51:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:17.557 09:51:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 59799' 00:07:17.557 09:51:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 59799 00:07:17.557 09:51:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 59799 ']' 00:07:17.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:17.557 09:51:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.557 09:51:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:17.557 09:51:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:17.557 09:51:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:17.557 09:51:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.557 [2024-10-21 09:51:54.106010] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:07:17.557 [2024-10-21 09:51:54.106124] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:17.817 [2024-10-21 09:51:54.256634] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.817 [2024-10-21 09:51:54.397183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.076 [2024-10-21 09:51:54.663850] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:18.076 [2024-10-21 09:51:54.663898] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:18.644 09:51:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:18.644 09:51:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:07:18.644 09:51:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:07:18.644 09:51:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:07:18.644 09:51:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.212 malloc0 00:07:19.212 09:51:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.212 09:51:55 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:19.212 09:51:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.212 09:51:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.212 [2024-10-21 09:51:55.669061] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:19.212 [2024-10-21 09:51:55.669134] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:19.212 [2024-10-21 09:51:55.669158] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:07:19.212 [2024-10-21 09:51:55.669170] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:19.212 [2024-10-21 09:51:55.671249] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:19.212 [2024-10-21 09:51:55.671291] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:19.212 pt0 00:07:19.212 09:51:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.212 09:51:55 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:07:19.212 09:51:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.212 09:51:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.212 7ed9d0b6-a749-44ba-88c1-438fb56512e4 00:07:19.212 09:51:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.212 09:51:55 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:07:19.212 09:51:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.212 09:51:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.212 0dd7e2b2-7a9d-4c72-a9aa-e5a885a825cd 00:07:19.212 09:51:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.212 09:51:55 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:07:19.212 09:51:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.212 09:51:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.212 b96f7bb6-0889-4a8c-a65f-5f59a5a17051 00:07:19.212 09:51:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.212 09:51:55 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:07:19.212 09:51:55 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:07:19.212 09:51:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.212 09:51:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.212 [2024-10-21 09:51:55.802985] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 0dd7e2b2-7a9d-4c72-a9aa-e5a885a825cd is claimed 00:07:19.212 [2024-10-21 09:51:55.803078] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev b96f7bb6-0889-4a8c-a65f-5f59a5a17051 is claimed 00:07:19.212 [2024-10-21 09:51:55.803212] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000005b80 00:07:19.212 [2024-10-21 09:51:55.803229] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:07:19.212 [2024-10-21 09:51:55.803486] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:07:19.212 [2024-10-21 09:51:55.803748] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000005b80 00:07:19.212 [2024-10-21 09:51:55.803799] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000005b80 00:07:19.212 [2024-10-21 09:51:55.803989] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:19.472 09:51:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.472 09:51:55 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:19.472 09:51:55 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:07:19.472 09:51:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.472 09:51:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.472 09:51:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.472 09:51:55 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:07:19.472 09:51:55 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:19.472 09:51:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.472 09:51:55 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:07:19.472 09:51:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.472 09:51:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.472 09:51:55 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:07:19.472 09:51:55 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:19.472 09:51:55 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:07:19.472 09:51:55 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:19.472 09:51:55 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:19.472 09:51:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.472 09:51:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.472 [2024-10-21 09:51:55.914971] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:19.472 09:51:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.472 09:51:55 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:19.472 09:51:55 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:19.472 09:51:55 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:07:19.472 09:51:55 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:07:19.472 09:51:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.472 09:51:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.472 [2024-10-21 09:51:55.942880] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:19.472 [2024-10-21 09:51:55.942908] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '0dd7e2b2-7a9d-4c72-a9aa-e5a885a825cd' was resized: old size 131072, new size 204800 
00:07:19.472 09:51:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.472 09:51:55 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:07:19.472 09:51:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.472 09:51:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.472 [2024-10-21 09:51:55.954824] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:19.472 [2024-10-21 09:51:55.954847] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'b96f7bb6-0889-4a8c-a65f-5f59a5a17051' was resized: old size 131072, new size 204800 00:07:19.472 [2024-10-21 09:51:55.954874] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:07:19.472 09:51:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.472 09:51:55 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:07:19.472 09:51:55 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:19.472 09:51:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.472 09:51:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.472 09:51:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.472 09:51:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:07:19.472 09:51:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:19.472 09:51:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.472 09:51:56 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.472 09:51:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:07:19.472 09:51:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.472 09:51:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:07:19.472 09:51:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:19.472 09:51:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:19.472 09:51:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:19.472 09:51:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:07:19.472 09:51:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.472 09:51:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.732 [2024-10-21 09:51:56.066785] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:19.732 09:51:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.732 09:51:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:19.732 09:51:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:19.732 09:51:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:07:19.732 09:51:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:07:19.732 09:51:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.732 09:51:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:07:19.732 [2024-10-21 09:51:56.094487] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:07:19.732 [2024-10-21 09:51:56.094561] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:07:19.732 [2024-10-21 09:51:56.094624] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:07:19.732 [2024-10-21 09:51:56.094785] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:19.732 [2024-10-21 09:51:56.094972] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:19.732 [2024-10-21 09:51:56.095038] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:19.732 [2024-10-21 09:51:56.095052] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000005b80 name Raid, state offline 00:07:19.732 09:51:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.732 09:51:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:19.732 09:51:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.732 09:51:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.732 [2024-10-21 09:51:56.106404] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:19.732 [2024-10-21 09:51:56.106508] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:19.732 [2024-10-21 09:51:56.106547] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:19.732 [2024-10-21 09:51:56.106598] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:19.732 [2024-10-21 09:51:56.108780] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:19.732 
[2024-10-21 09:51:56.108855] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:19.732 [2024-10-21 09:51:56.110564] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 0dd7e2b2-7a9d-4c72-a9aa-e5a885a825cd 00:07:19.732 [2024-10-21 09:51:56.110698] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 0dd7e2b2-7a9d-4c72-a9aa-e5a885a825cd is claimed 00:07:19.732 [2024-10-21 09:51:56.110865] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev b96f7bb6-0889-4a8c-a65f-5f59a5a17051 00:07:19.732 [2024-10-21 09:51:56.110944] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev b96f7bb6-0889-4a8c-a65f-5f59a5a17051 is claimed 00:07:19.732 [2024-10-21 09:51:56.111162] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev b96f7bb6-0889-4a8c-a65f-5f59a5a17051 (2) smaller than existing raid bdev Raid (3) 00:07:19.732 [2024-10-21 09:51:56.111232] bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 0dd7e2b2-7a9d-4c72-a9aa-e5a885a825cd: File exists 00:07:19.732 [2024-10-21 09:51:56.111301] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000005f00 00:07:19.732 [2024-10-21 09:51:56.111340] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:07:19.732 pt0 00:07:19.732 [2024-10-21 09:51:56.111641] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:19.732 [2024-10-21 09:51:56.111827] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000005f00 00:07:19.732 [2024-10-21 09:51:56.111838] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000005f00 00:07:19.732 [2024-10-21 09:51:56.112001] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:19.732 09:51:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:07:19.732 09:51:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:07:19.732 09:51:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.732 09:51:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.732 09:51:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.732 09:51:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:19.732 09:51:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:19.732 09:51:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.732 09:51:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:19.732 09:51:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:07:19.732 09:51:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.732 [2024-10-21 09:51:56.134869] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:19.732 09:51:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.732 09:51:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:19.732 09:51:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:19.732 09:51:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:07:19.732 09:51:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 59799 00:07:19.732 09:51:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 59799 ']' 00:07:19.732 09:51:56 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@954 -- # kill -0 59799 00:07:19.732 09:51:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@955 -- # uname 00:07:19.732 09:51:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:19.732 09:51:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59799 00:07:19.732 killing process with pid 59799 00:07:19.732 09:51:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:19.732 09:51:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:19.733 09:51:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59799' 00:07:19.733 09:51:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@969 -- # kill 59799 00:07:19.733 [2024-10-21 09:51:56.208995] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:19.733 [2024-10-21 09:51:56.209065] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:19.733 [2024-10-21 09:51:56.209113] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:19.733 [2024-10-21 09:51:56.209123] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000005f00 name Raid, state offline 00:07:19.733 09:51:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@974 -- # wait 59799 00:07:21.111 [2024-10-21 09:51:57.637555] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:22.490 ************************************ 00:07:22.490 END TEST raid1_resize_superblock_test 00:07:22.490 ************************************ 00:07:22.490 09:51:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:07:22.490 00:07:22.490 real 0m4.757s 00:07:22.490 user 0m4.767s 
00:07:22.490 sys 0m0.724s 00:07:22.490 09:51:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:22.490 09:51:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.490 09:51:58 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:07:22.490 09:51:58 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:07:22.490 09:51:58 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:07:22.490 09:51:58 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:07:22.490 09:51:58 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:07:22.490 09:51:58 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:07:22.490 09:51:58 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:22.490 09:51:58 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:22.490 09:51:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:22.490 ************************************ 00:07:22.490 START TEST raid_function_test_raid0 00:07:22.490 ************************************ 00:07:22.490 09:51:58 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1125 -- # raid_function_test raid0 00:07:22.490 09:51:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:07:22.490 09:51:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:07:22.490 09:51:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:07:22.490 Process raid pid: 59905 00:07:22.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:22.490 09:51:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=59905 00:07:22.490 09:51:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:22.490 09:51:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 59905' 00:07:22.490 09:51:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 59905 00:07:22.490 09:51:58 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@831 -- # '[' -z 59905 ']' 00:07:22.490 09:51:58 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:22.490 09:51:58 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:22.490 09:51:58 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:22.490 09:51:58 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:22.490 09:51:58 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:22.490 [2024-10-21 09:51:58.956508] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
00:07:22.490 [2024-10-21 09:51:58.957087] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:22.749 [2024-10-21 09:51:59.119986] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.749 [2024-10-21 09:51:59.240065] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.008 [2024-10-21 09:51:59.469385] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:23.008 [2024-10-21 09:51:59.469488] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:23.267 09:51:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:23.267 09:51:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # return 0 00:07:23.267 09:51:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:07:23.267 09:51:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.267 09:51:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:23.267 Base_1 00:07:23.267 09:51:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.267 09:51:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:07:23.268 09:51:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.268 09:51:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:23.527 Base_2 00:07:23.527 09:51:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.527 09:51:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 
64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:07:23.527 09:51:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.527 09:51:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:23.527 [2024-10-21 09:51:59.882134] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:23.527 [2024-10-21 09:51:59.884007] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:23.527 [2024-10-21 09:51:59.884117] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000005b80 00:07:23.527 [2024-10-21 09:51:59.884156] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:23.527 [2024-10-21 09:51:59.884439] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:07:23.527 [2024-10-21 09:51:59.884640] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000005b80 00:07:23.527 [2024-10-21 09:51:59.884682] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000005b80 00:07:23.527 [2024-10-21 09:51:59.884880] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:23.527 09:51:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.527 09:51:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:23.527 09:51:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:07:23.527 09:51:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.527 09:51:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:23.527 09:51:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.527 09:51:59 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:07:23.527 09:51:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:07:23.527 09:51:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:07:23.527 09:51:59 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:07:23.527 09:51:59 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:07:23.527 09:51:59 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:23.527 09:51:59 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:07:23.527 09:51:59 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:23.527 09:51:59 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:07:23.527 09:51:59 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:23.527 09:51:59 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:23.527 09:51:59 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:07:23.786 [2024-10-21 09:52:00.129738] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:07:23.786 /dev/nbd0 00:07:23.786 09:52:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:23.786 09:52:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:23.786 09:52:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:23.786 09:52:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@869 -- # local i 00:07:23.786 09:52:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:23.786 
09:52:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:23.786 09:52:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:23.786 09:52:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # break 00:07:23.786 09:52:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:23.786 09:52:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:23.786 09:52:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:23.786 1+0 records in 00:07:23.786 1+0 records out 00:07:23.786 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00037497 s, 10.9 MB/s 00:07:23.786 09:52:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:23.786 09:52:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # size=4096 00:07:23.786 09:52:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:23.786 09:52:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:23.786 09:52:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # return 0 00:07:23.786 09:52:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:23.786 09:52:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:23.786 09:52:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:07:23.786 09:52:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:23.786 09:52:00 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:24.044 09:52:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:24.044 { 00:07:24.044 "nbd_device": "/dev/nbd0", 00:07:24.044 "bdev_name": "raid" 00:07:24.044 } 00:07:24.044 ]' 00:07:24.044 09:52:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:24.044 { 00:07:24.044 "nbd_device": "/dev/nbd0", 00:07:24.044 "bdev_name": "raid" 00:07:24.044 } 00:07:24.044 ]' 00:07:24.044 09:52:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:24.044 09:52:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:07:24.044 09:52:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:07:24.044 09:52:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:24.044 09:52:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:07:24.044 09:52:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:07:24.044 09:52:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:07:24.044 09:52:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:07:24.044 09:52:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:07:24.044 09:52:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:07:24.044 09:52:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:07:24.044 09:52:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:07:24.044 09:52:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:07:24.044 09:52:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v 
LOG-SEC 00:07:24.044 09:52:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:07:24.044 09:52:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:07:24.044 09:52:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:07:24.044 09:52:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:07:24.044 09:52:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:07:24.044 09:52:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:07:24.044 09:52:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:07:24.044 09:52:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:07:24.044 09:52:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:07:24.044 09:52:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:07:24.044 09:52:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:07:24.044 4096+0 records in 00:07:24.044 4096+0 records out 00:07:24.044 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0274092 s, 76.5 MB/s 00:07:24.044 09:52:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:07:24.303 4096+0 records in 00:07:24.303 4096+0 records out 00:07:24.303 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.193472 s, 10.8 MB/s 00:07:24.303 09:52:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:07:24.303 09:52:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:24.303 09:52:00 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:07:24.303 09:52:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:24.303 09:52:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:07:24.303 09:52:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:07:24.303 09:52:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:07:24.303 128+0 records in 00:07:24.303 128+0 records out 00:07:24.303 65536 bytes (66 kB, 64 KiB) copied, 0.001356 s, 48.3 MB/s 00:07:24.303 09:52:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:07:24.303 09:52:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:24.303 09:52:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:24.303 09:52:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:24.303 09:52:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:24.303 09:52:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:07:24.303 09:52:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:07:24.303 09:52:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:07:24.303 2035+0 records in 00:07:24.303 2035+0 records out 00:07:24.303 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0146331 s, 71.2 MB/s 00:07:24.303 09:52:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:07:24.303 09:52:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:24.303 09:52:00 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:24.303 09:52:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:24.303 09:52:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:24.303 09:52:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:07:24.303 09:52:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:07:24.304 09:52:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:07:24.304 456+0 records in 00:07:24.304 456+0 records out 00:07:24.304 233472 bytes (233 kB, 228 KiB) copied, 0.00406856 s, 57.4 MB/s 00:07:24.304 09:52:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:07:24.304 09:52:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:24.304 09:52:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:24.304 09:52:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:24.304 09:52:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:24.304 09:52:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:07:24.304 09:52:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:07:24.304 09:52:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:07:24.304 09:52:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:24.304 09:52:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:24.304 09:52:00 
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:07:24.304 09:52:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:24.304 09:52:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:07:24.563 09:52:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:24.563 [2024-10-21 09:52:01.050386] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:24.563 09:52:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:24.563 09:52:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:24.563 09:52:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:24.563 09:52:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:24.563 09:52:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:24.563 09:52:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:07:24.563 09:52:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:07:24.563 09:52:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:07:24.563 09:52:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:24.563 09:52:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:24.823 09:52:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:24.823 09:52:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:24.823 09:52:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:07:24.823 09:52:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:24.823 09:52:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:07:24.823 09:52:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:24.823 09:52:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:07:24.823 09:52:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:07:24.823 09:52:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:07:24.823 09:52:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:07:24.823 09:52:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:07:24.823 09:52:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 59905 00:07:24.823 09:52:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@950 -- # '[' -z 59905 ']' 00:07:24.823 09:52:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # kill -0 59905 00:07:24.823 09:52:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@955 -- # uname 00:07:24.823 09:52:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:24.823 09:52:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59905 00:07:24.823 killing process with pid 59905 00:07:24.823 09:52:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:24.823 09:52:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:24.823 09:52:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59905' 00:07:24.823 09:52:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@969 -- # kill 59905 
00:07:24.823 [2024-10-21 09:52:01.376421] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:24.823 [2024-10-21 09:52:01.376555] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:24.823 09:52:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@974 -- # wait 59905 00:07:24.823 [2024-10-21 09:52:01.376627] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:24.823 [2024-10-21 09:52:01.376644] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000005b80 name raid, state offline 00:07:25.085 [2024-10-21 09:52:01.611183] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:26.464 09:52:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:07:26.464 00:07:26.464 real 0m3.985s 00:07:26.464 user 0m4.603s 00:07:26.464 sys 0m0.957s 00:07:26.464 09:52:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:26.464 ************************************ 00:07:26.464 END TEST raid_function_test_raid0 00:07:26.464 ************************************ 00:07:26.464 09:52:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:26.464 09:52:02 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:07:26.464 09:52:02 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:26.464 09:52:02 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:26.464 09:52:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:26.464 ************************************ 00:07:26.464 START TEST raid_function_test_concat 00:07:26.464 ************************************ 00:07:26.464 09:52:02 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1125 -- # raid_function_test concat 00:07:26.464 09:52:02 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:07:26.464 09:52:02 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:07:26.464 09:52:02 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:07:26.464 09:52:02 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=60034 00:07:26.464 09:52:02 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:26.464 09:52:02 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60034' 00:07:26.464 Process raid pid: 60034 00:07:26.464 09:52:02 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 60034 00:07:26.464 09:52:02 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@831 -- # '[' -z 60034 ']' 00:07:26.464 09:52:02 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:26.464 09:52:02 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:26.464 09:52:02 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:26.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:26.464 09:52:02 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:26.464 09:52:02 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:26.464 [2024-10-21 09:52:03.011626] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
00:07:26.464 [2024-10-21 09:52:03.012218] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:26.723 [2024-10-21 09:52:03.176458] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.983 [2024-10-21 09:52:03.327258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.243 [2024-10-21 09:52:03.593575] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:27.243 [2024-10-21 09:52:03.593623] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:27.503 09:52:03 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:27.503 09:52:03 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # return 0 00:07:27.503 09:52:03 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:07:27.503 09:52:03 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.503 09:52:03 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:27.503 Base_1 00:07:27.503 09:52:03 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.503 09:52:03 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:07:27.503 09:52:03 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.503 09:52:03 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:27.503 Base_2 00:07:27.503 09:52:03 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.503 09:52:03 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:07:27.503 09:52:03 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.503 09:52:03 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:27.503 [2024-10-21 09:52:03.953250] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:27.503 [2024-10-21 09:52:03.955421] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:27.503 [2024-10-21 09:52:03.955496] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000005b80 00:07:27.503 [2024-10-21 09:52:03.955508] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:27.504 [2024-10-21 09:52:03.955783] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:07:27.504 [2024-10-21 09:52:03.955948] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000005b80 00:07:27.504 [2024-10-21 09:52:03.955965] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000005b80 00:07:27.504 [2024-10-21 09:52:03.956113] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:27.504 09:52:03 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.504 09:52:03 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:27.504 09:52:03 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:07:27.504 09:52:03 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.504 09:52:03 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:27.504 09:52:03 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.504 09:52:04 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:07:27.504 09:52:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:07:27.504 09:52:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:07:27.504 09:52:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:07:27.504 09:52:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:07:27.504 09:52:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:27.504 09:52:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:07:27.504 09:52:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:27.504 09:52:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:07:27.504 09:52:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:27.504 09:52:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:27.504 09:52:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:07:27.764 [2024-10-21 09:52:04.196874] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:07:27.764 /dev/nbd0 00:07:27.764 09:52:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:27.764 09:52:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:27.764 09:52:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:27.764 09:52:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@869 -- # local i 00:07:27.764 09:52:04 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:27.764 09:52:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:27.764 09:52:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:27.764 09:52:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # break 00:07:27.764 09:52:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:27.764 09:52:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:27.764 09:52:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:27.764 1+0 records in 00:07:27.764 1+0 records out 00:07:27.764 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000357136 s, 11.5 MB/s 00:07:27.764 09:52:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:27.764 09:52:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # size=4096 00:07:27.764 09:52:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:27.764 09:52:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:27.764 09:52:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # return 0 00:07:27.764 09:52:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:27.764 09:52:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:27.764 09:52:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:07:27.764 09:52:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk.sock 00:07:27.764 09:52:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:28.024 09:52:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:28.024 { 00:07:28.024 "nbd_device": "/dev/nbd0", 00:07:28.024 "bdev_name": "raid" 00:07:28.024 } 00:07:28.024 ]' 00:07:28.024 09:52:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:28.024 { 00:07:28.024 "nbd_device": "/dev/nbd0", 00:07:28.024 "bdev_name": "raid" 00:07:28.024 } 00:07:28.024 ]' 00:07:28.024 09:52:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:28.024 09:52:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:07:28.024 09:52:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:07:28.024 09:52:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:28.024 09:52:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:07:28.024 09:52:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:07:28.024 09:52:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:07:28.024 09:52:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:07:28.024 09:52:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:07:28.024 09:52:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:07:28.024 09:52:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:07:28.024 09:52:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:07:28.024 09:52:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC 
/dev/nbd0 00:07:28.024 09:52:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:07:28.024 09:52:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:07:28.024 09:52:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:07:28.024 09:52:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:07:28.024 09:52:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:07:28.024 09:52:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:07:28.024 09:52:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:07:28.024 09:52:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:07:28.024 09:52:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:07:28.024 09:52:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:07:28.024 09:52:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:07:28.024 09:52:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:07:28.024 4096+0 records in 00:07:28.024 4096+0 records out 00:07:28.024 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0330938 s, 63.4 MB/s 00:07:28.024 09:52:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:07:28.284 4096+0 records in 00:07:28.284 4096+0 records out 00:07:28.284 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.210694 s, 10.0 MB/s 00:07:28.284 09:52:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:07:28.284 09:52:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 
2097152 /raidtest/raidrandtest /dev/nbd0 00:07:28.284 09:52:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:07:28.284 09:52:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:28.284 09:52:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:07:28.284 09:52:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:07:28.284 09:52:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:07:28.284 128+0 records in 00:07:28.284 128+0 records out 00:07:28.284 65536 bytes (66 kB, 64 KiB) copied, 0.00109371 s, 59.9 MB/s 00:07:28.284 09:52:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:07:28.284 09:52:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:28.284 09:52:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:28.284 09:52:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:28.284 09:52:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:28.284 09:52:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:07:28.284 09:52:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:07:28.284 09:52:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:07:28.284 2035+0 records in 00:07:28.284 2035+0 records out 00:07:28.284 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0145294 s, 71.7 MB/s 00:07:28.284 09:52:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:07:28.284 09:52:04 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:28.284 09:52:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:28.542 09:52:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:28.542 09:52:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:28.542 09:52:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:07:28.542 09:52:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:07:28.543 09:52:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:07:28.543 456+0 records in 00:07:28.543 456+0 records out 00:07:28.543 233472 bytes (233 kB, 228 KiB) copied, 0.00383467 s, 60.9 MB/s 00:07:28.543 09:52:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:07:28.543 09:52:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:28.543 09:52:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:28.543 09:52:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:28.543 09:52:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:28.543 09:52:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:07:28.543 09:52:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:07:28.543 09:52:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:07:28.543 09:52:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:28.543 
09:52:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:28.543 09:52:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:07:28.543 09:52:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:28.543 09:52:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:07:28.543 09:52:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:28.543 [2024-10-21 09:52:05.117613] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:28.543 09:52:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:28.543 09:52:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:28.543 09:52:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:28.543 09:52:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:28.543 09:52:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:28.543 09:52:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:07:28.543 09:52:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:07:28.543 09:52:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:07:28.543 09:52:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:28.543 09:52:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:28.802 09:52:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:28.802 09:52:05 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:28.802 09:52:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:28.802 09:52:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:28.802 09:52:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:28.802 09:52:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:28.802 09:52:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:07:28.802 09:52:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:07:28.802 09:52:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:29.061 09:52:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:07:29.061 09:52:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:07:29.061 09:52:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 60034 00:07:29.061 09:52:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@950 -- # '[' -z 60034 ']' 00:07:29.061 09:52:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # kill -0 60034 00:07:29.061 09:52:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@955 -- # uname 00:07:29.061 09:52:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:29.061 09:52:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60034 00:07:29.061 killing process with pid 60034 00:07:29.062 09:52:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:29.062 09:52:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:29.062 09:52:05 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 60034' 00:07:29.062 09:52:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@969 -- # kill 60034 00:07:29.062 [2024-10-21 09:52:05.439936] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:29.062 09:52:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@974 -- # wait 60034 00:07:29.062 [2024-10-21 09:52:05.440069] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:29.062 [2024-10-21 09:52:05.440131] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:29.062 [2024-10-21 09:52:05.440144] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000005b80 name raid, state offline 00:07:29.321 [2024-10-21 09:52:05.659493] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:30.698 09:52:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:07:30.698 00:07:30.698 real 0m3.961s 00:07:30.698 user 0m4.482s 00:07:30.698 sys 0m1.042s 00:07:30.698 09:52:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:30.698 ************************************ 00:07:30.698 END TEST raid_function_test_concat 00:07:30.698 ************************************ 00:07:30.698 09:52:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:30.698 09:52:06 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:07:30.698 09:52:06 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:30.698 09:52:06 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:30.698 09:52:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:30.698 ************************************ 00:07:30.698 START TEST raid0_resize_test 00:07:30.698 ************************************ 00:07:30.698 09:52:06 
bdev_raid.raid0_resize_test -- common/autotest_common.sh@1125 -- # raid_resize_test 0 00:07:30.698 09:52:06 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:07:30.698 09:52:06 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:07:30.698 09:52:06 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:07:30.698 09:52:06 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:07:30.698 09:52:06 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:07:30.698 09:52:06 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:07:30.698 09:52:06 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:07:30.698 09:52:06 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:07:30.698 09:52:06 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60163 00:07:30.698 09:52:06 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:30.698 09:52:06 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60163' 00:07:30.698 Process raid pid: 60163 00:07:30.698 09:52:06 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60163 00:07:30.698 09:52:06 bdev_raid.raid0_resize_test -- common/autotest_common.sh@831 -- # '[' -z 60163 ']' 00:07:30.698 09:52:06 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:30.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:30.699 09:52:06 bdev_raid.raid0_resize_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:30.699 09:52:06 bdev_raid.raid0_resize_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:30.699 09:52:06 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:30.699 09:52:06 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.699 [2024-10-21 09:52:07.040048] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:07:30.699 [2024-10-21 09:52:07.040247] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:30.699 [2024-10-21 09:52:07.204529] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.957 [2024-10-21 09:52:07.351403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.215 [2024-10-21 09:52:07.618656] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:31.215 [2024-10-21 09:52:07.618708] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:31.475 09:52:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:31.475 09:52:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # return 0 00:07:31.475 09:52:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:07:31.475 09:52:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.475 09:52:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.475 Base_1 00:07:31.475 09:52:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.475 09:52:07 
bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:07:31.475 09:52:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.475 09:52:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.475 Base_2 00:07:31.475 09:52:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.475 09:52:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:07:31.475 09:52:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:07:31.475 09:52:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.475 09:52:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.475 [2024-10-21 09:52:07.888824] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:31.475 [2024-10-21 09:52:07.890920] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:31.475 [2024-10-21 09:52:07.890976] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000005b80 00:07:31.475 [2024-10-21 09:52:07.890987] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:31.475 [2024-10-21 09:52:07.891238] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:07:31.475 [2024-10-21 09:52:07.891366] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000005b80 00:07:31.475 [2024-10-21 09:52:07.891375] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000005b80 00:07:31.475 [2024-10-21 09:52:07.891499] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:31.475 09:52:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.475 09:52:07 
bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:07:31.475 09:52:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.475 09:52:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.475 [2024-10-21 09:52:07.900748] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:31.475 [2024-10-21 09:52:07.900775] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:07:31.475 true 00:07:31.475 09:52:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.475 09:52:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:31.475 09:52:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:07:31.475 09:52:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.475 09:52:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.476 [2024-10-21 09:52:07.916877] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:31.476 09:52:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.476 09:52:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:07:31.476 09:52:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:07:31.476 09:52:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:07:31.476 09:52:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:07:31.476 09:52:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:07:31.476 09:52:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:07:31.476 09:52:07 bdev_raid.raid0_resize_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.476 09:52:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.476 [2024-10-21 09:52:07.960685] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:31.476 [2024-10-21 09:52:07.960792] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:07:31.476 [2024-10-21 09:52:07.960843] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:07:31.476 true 00:07:31.476 09:52:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.476 09:52:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:07:31.476 09:52:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:31.476 09:52:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.476 09:52:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.476 [2024-10-21 09:52:07.976824] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:31.476 09:52:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.476 09:52:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:07:31.476 09:52:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:07:31.476 09:52:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:07:31.476 09:52:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:07:31.476 09:52:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:07:31.476 09:52:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60163 00:07:31.476 09:52:08 bdev_raid.raid0_resize_test -- 
common/autotest_common.sh@950 -- # '[' -z 60163 ']' 00:07:31.476 09:52:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # kill -0 60163 00:07:31.476 09:52:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@955 -- # uname 00:07:31.476 09:52:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:31.476 09:52:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60163 00:07:31.476 killing process with pid 60163 00:07:31.476 09:52:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:31.476 09:52:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:31.476 09:52:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60163' 00:07:31.476 09:52:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@969 -- # kill 60163 00:07:31.476 [2024-10-21 09:52:08.058829] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:31.476 [2024-10-21 09:52:08.058961] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:31.476 09:52:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@974 -- # wait 60163 00:07:31.476 [2024-10-21 09:52:08.059019] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:31.476 [2024-10-21 09:52:08.059029] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000005b80 name Raid, state offline 00:07:31.735 [2024-10-21 09:52:08.077538] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:33.114 09:52:09 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:07:33.114 00:07:33.114 real 0m2.328s 00:07:33.114 user 0m2.399s 00:07:33.114 sys 0m0.394s 00:07:33.114 09:52:09 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:33.114 
09:52:09 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.114 ************************************ 00:07:33.114 END TEST raid0_resize_test 00:07:33.114 ************************************ 00:07:33.114 09:52:09 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:07:33.114 09:52:09 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:33.114 09:52:09 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:33.114 09:52:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:33.115 ************************************ 00:07:33.115 START TEST raid1_resize_test 00:07:33.115 ************************************ 00:07:33.115 09:52:09 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1125 -- # raid_resize_test 1 00:07:33.115 09:52:09 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:07:33.115 09:52:09 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:07:33.115 09:52:09 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:07:33.115 09:52:09 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:07:33.115 09:52:09 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:07:33.115 09:52:09 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:07:33.115 09:52:09 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:07:33.115 09:52:09 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:07:33.115 09:52:09 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60219 00:07:33.115 09:52:09 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:33.115 09:52:09 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60219' 
00:07:33.115 Process raid pid: 60219 00:07:33.115 09:52:09 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60219 00:07:33.115 09:52:09 bdev_raid.raid1_resize_test -- common/autotest_common.sh@831 -- # '[' -z 60219 ']' 00:07:33.115 09:52:09 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:33.115 09:52:09 bdev_raid.raid1_resize_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:33.115 09:52:09 bdev_raid.raid1_resize_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:33.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:33.115 09:52:09 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:33.115 09:52:09 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.115 [2024-10-21 09:52:09.433428] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
00:07:33.115 [2024-10-21 09:52:09.433638] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:33.115 [2024-10-21 09:52:09.599348] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.374 [2024-10-21 09:52:09.742148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.637 [2024-10-21 09:52:09.996506] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:33.637 [2024-10-21 09:52:09.996556] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:33.901 09:52:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:33.901 09:52:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # return 0 00:07:33.901 09:52:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:07:33.901 09:52:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.901 09:52:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.901 Base_1 00:07:33.901 09:52:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.901 09:52:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:07:33.901 09:52:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.901 09:52:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.901 Base_2 00:07:33.901 09:52:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.901 09:52:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:07:33.901 09:52:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd 
bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:07:33.901 09:52:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.901 09:52:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.901 [2024-10-21 09:52:10.286952] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:33.901 [2024-10-21 09:52:10.288972] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:33.901 [2024-10-21 09:52:10.289036] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000005b80 00:07:33.901 [2024-10-21 09:52:10.289049] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:07:33.901 [2024-10-21 09:52:10.289289] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:07:33.901 [2024-10-21 09:52:10.289414] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000005b80 00:07:33.901 [2024-10-21 09:52:10.289423] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000005b80 00:07:33.901 [2024-10-21 09:52:10.289581] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:33.901 09:52:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.901 09:52:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:07:33.901 09:52:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.901 09:52:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.901 [2024-10-21 09:52:10.298879] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:33.901 [2024-10-21 09:52:10.298948] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:07:33.901 true 00:07:33.901 
09:52:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.901 09:52:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:33.901 09:52:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:07:33.901 09:52:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.901 09:52:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.901 [2024-10-21 09:52:10.315000] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:33.901 09:52:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.901 09:52:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:07:33.901 09:52:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:07:33.901 09:52:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:07:33.901 09:52:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:07:33.901 09:52:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:07:33.901 09:52:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:07:33.901 09:52:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.901 09:52:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.901 [2024-10-21 09:52:10.358784] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:33.901 [2024-10-21 09:52:10.358858] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:07:33.901 [2024-10-21 09:52:10.358924] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:07:33.901 true 00:07:33.901 09:52:10 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.901 09:52:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:33.901 09:52:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:07:33.901 09:52:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.901 09:52:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.901 [2024-10-21 09:52:10.374963] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:33.901 09:52:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.901 09:52:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:07:33.901 09:52:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:07:33.901 09:52:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:07:33.901 09:52:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:07:33.901 09:52:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:07:33.901 09:52:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60219 00:07:33.901 09:52:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@950 -- # '[' -z 60219 ']' 00:07:33.901 09:52:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # kill -0 60219 00:07:33.901 09:52:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@955 -- # uname 00:07:33.901 09:52:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:33.901 09:52:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60219 00:07:33.901 killing process with pid 60219 00:07:33.902 09:52:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:33.902 09:52:10 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:33.902 09:52:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60219' 00:07:33.902 09:52:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@969 -- # kill 60219 00:07:33.902 [2024-10-21 09:52:10.441551] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:33.902 [2024-10-21 09:52:10.441662] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:33.902 09:52:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@974 -- # wait 60219 00:07:33.902 [2024-10-21 09:52:10.442156] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:33.902 [2024-10-21 09:52:10.442239] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000005b80 name Raid, state offline 00:07:33.902 [2024-10-21 09:52:10.460227] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:35.308 09:52:11 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:07:35.308 00:07:35.308 real 0m2.320s 00:07:35.308 user 0m2.369s 00:07:35.308 sys 0m0.401s 00:07:35.308 09:52:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:35.308 09:52:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.308 ************************************ 00:07:35.308 END TEST raid1_resize_test 00:07:35.308 ************************************ 00:07:35.308 09:52:11 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:07:35.308 09:52:11 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:35.308 09:52:11 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:07:35.308 09:52:11 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:35.308 09:52:11 bdev_raid -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:07:35.308 09:52:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:35.308 ************************************ 00:07:35.308 START TEST raid_state_function_test 00:07:35.308 ************************************ 00:07:35.308 09:52:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 2 false 00:07:35.308 09:52:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:35.308 09:52:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:35.308 09:52:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:35.308 09:52:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:35.308 09:52:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:35.308 09:52:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:35.308 09:52:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:35.308 09:52:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:35.308 09:52:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:35.308 09:52:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:35.308 09:52:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:35.308 09:52:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:35.308 09:52:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:35.308 09:52:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:35.308 09:52:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # 
local raid_bdev_name=Existed_Raid 00:07:35.308 09:52:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:35.308 09:52:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:35.308 09:52:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:35.308 09:52:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:35.308 09:52:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:35.308 09:52:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:35.308 09:52:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:35.308 09:52:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:35.308 09:52:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=60282 00:07:35.308 09:52:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:35.308 Process raid pid: 60282 00:07:35.308 09:52:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60282' 00:07:35.308 09:52:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 60282 00:07:35.308 09:52:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 60282 ']' 00:07:35.308 09:52:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:35.308 09:52:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:35.308 09:52:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:35.308 09:52:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:35.308 09:52:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.308 [2024-10-21 09:52:11.844342] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:07:35.308 [2024-10-21 09:52:11.844887] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:35.569 [2024-10-21 09:52:11.992956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.569 [2024-10-21 09:52:12.137266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.829 [2024-10-21 09:52:12.393531] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:35.829 [2024-10-21 09:52:12.393590] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:36.089 09:52:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:36.089 09:52:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:07:36.089 09:52:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:36.090 09:52:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.090 09:52:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.090 [2024-10-21 
09:52:12.668450] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:36.090 [2024-10-21 09:52:12.668521] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:36.090 [2024-10-21 09:52:12.668531] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:36.090 [2024-10-21 09:52:12.668541] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:36.090 09:52:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.090 09:52:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:36.090 09:52:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:36.090 09:52:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:36.090 09:52:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:36.090 09:52:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:36.090 09:52:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:36.090 09:52:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:36.090 09:52:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:36.090 09:52:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:36.090 09:52:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:36.090 09:52:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:36.090 09:52:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:07:36.090 09:52:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.090 09:52:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.349 09:52:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.349 09:52:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:36.349 "name": "Existed_Raid", 00:07:36.349 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:36.349 "strip_size_kb": 64, 00:07:36.349 "state": "configuring", 00:07:36.349 "raid_level": "raid0", 00:07:36.349 "superblock": false, 00:07:36.349 "num_base_bdevs": 2, 00:07:36.349 "num_base_bdevs_discovered": 0, 00:07:36.349 "num_base_bdevs_operational": 2, 00:07:36.349 "base_bdevs_list": [ 00:07:36.349 { 00:07:36.349 "name": "BaseBdev1", 00:07:36.349 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:36.349 "is_configured": false, 00:07:36.349 "data_offset": 0, 00:07:36.349 "data_size": 0 00:07:36.349 }, 00:07:36.349 { 00:07:36.349 "name": "BaseBdev2", 00:07:36.349 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:36.349 "is_configured": false, 00:07:36.349 "data_offset": 0, 00:07:36.349 "data_size": 0 00:07:36.349 } 00:07:36.349 ] 00:07:36.349 }' 00:07:36.349 09:52:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:36.349 09:52:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.607 09:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:36.607 09:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.607 09:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.607 [2024-10-21 09:52:13.111735] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:36.607 [2024-10-21 
09:52:13.111902] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000005b80 name Existed_Raid, state configuring 00:07:36.607 09:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.607 09:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:36.607 09:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.607 09:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.607 [2024-10-21 09:52:13.123661] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:36.607 [2024-10-21 09:52:13.123759] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:36.607 [2024-10-21 09:52:13.123784] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:36.607 [2024-10-21 09:52:13.123810] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:36.607 09:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.607 09:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:36.607 09:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.607 09:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.607 [2024-10-21 09:52:13.181045] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:36.607 BaseBdev1 00:07:36.607 09:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.607 09:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:36.607 09:52:13 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:36.607 09:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:36.607 09:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:36.607 09:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:36.607 09:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:36.607 09:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:36.607 09:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.607 09:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.607 09:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.608 09:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:36.608 09:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.608 09:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.867 [ 00:07:36.867 { 00:07:36.867 "name": "BaseBdev1", 00:07:36.867 "aliases": [ 00:07:36.867 "f71ccb72-ab98-4f6f-9a64-f8ea64804674" 00:07:36.867 ], 00:07:36.867 "product_name": "Malloc disk", 00:07:36.867 "block_size": 512, 00:07:36.867 "num_blocks": 65536, 00:07:36.867 "uuid": "f71ccb72-ab98-4f6f-9a64-f8ea64804674", 00:07:36.867 "assigned_rate_limits": { 00:07:36.867 "rw_ios_per_sec": 0, 00:07:36.867 "rw_mbytes_per_sec": 0, 00:07:36.867 "r_mbytes_per_sec": 0, 00:07:36.867 "w_mbytes_per_sec": 0 00:07:36.867 }, 00:07:36.867 "claimed": true, 00:07:36.867 "claim_type": "exclusive_write", 00:07:36.867 "zoned": false, 00:07:36.867 "supported_io_types": { 
00:07:36.867 "read": true, 00:07:36.867 "write": true, 00:07:36.867 "unmap": true, 00:07:36.867 "flush": true, 00:07:36.867 "reset": true, 00:07:36.867 "nvme_admin": false, 00:07:36.867 "nvme_io": false, 00:07:36.867 "nvme_io_md": false, 00:07:36.867 "write_zeroes": true, 00:07:36.867 "zcopy": true, 00:07:36.867 "get_zone_info": false, 00:07:36.867 "zone_management": false, 00:07:36.867 "zone_append": false, 00:07:36.867 "compare": false, 00:07:36.867 "compare_and_write": false, 00:07:36.867 "abort": true, 00:07:36.867 "seek_hole": false, 00:07:36.867 "seek_data": false, 00:07:36.867 "copy": true, 00:07:36.867 "nvme_iov_md": false 00:07:36.867 }, 00:07:36.867 "memory_domains": [ 00:07:36.867 { 00:07:36.867 "dma_device_id": "system", 00:07:36.867 "dma_device_type": 1 00:07:36.867 }, 00:07:36.867 { 00:07:36.867 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:36.867 "dma_device_type": 2 00:07:36.867 } 00:07:36.867 ], 00:07:36.867 "driver_specific": {} 00:07:36.867 } 00:07:36.867 ] 00:07:36.867 09:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.867 09:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:36.867 09:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:36.867 09:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:36.867 09:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:36.867 09:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:36.867 09:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:36.867 09:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:36.867 09:52:13 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:36.867 09:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:36.867 09:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:36.867 09:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:36.867 09:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:36.867 09:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:36.867 09:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.867 09:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.867 09:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.867 09:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:36.867 "name": "Existed_Raid", 00:07:36.867 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:36.867 "strip_size_kb": 64, 00:07:36.867 "state": "configuring", 00:07:36.867 "raid_level": "raid0", 00:07:36.867 "superblock": false, 00:07:36.867 "num_base_bdevs": 2, 00:07:36.867 "num_base_bdevs_discovered": 1, 00:07:36.867 "num_base_bdevs_operational": 2, 00:07:36.867 "base_bdevs_list": [ 00:07:36.867 { 00:07:36.867 "name": "BaseBdev1", 00:07:36.867 "uuid": "f71ccb72-ab98-4f6f-9a64-f8ea64804674", 00:07:36.867 "is_configured": true, 00:07:36.867 "data_offset": 0, 00:07:36.867 "data_size": 65536 00:07:36.867 }, 00:07:36.867 { 00:07:36.867 "name": "BaseBdev2", 00:07:36.867 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:36.867 "is_configured": false, 00:07:36.867 "data_offset": 0, 00:07:36.867 "data_size": 0 00:07:36.867 } 00:07:36.867 ] 00:07:36.867 }' 00:07:36.867 09:52:13 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:36.867 09:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.126 09:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:37.126 09:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.126 09:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.126 [2024-10-21 09:52:13.668346] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:37.126 [2024-10-21 09:52:13.668444] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000005f00 name Existed_Raid, state configuring 00:07:37.126 09:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.126 09:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:37.126 09:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.126 09:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.126 [2024-10-21 09:52:13.680351] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:37.126 [2024-10-21 09:52:13.682540] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:37.126 [2024-10-21 09:52:13.682692] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:37.126 09:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.126 09:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:37.126 09:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:37.126 09:52:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:37.126 09:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:37.126 09:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:37.126 09:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:37.126 09:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:37.126 09:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:37.126 09:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:37.126 09:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:37.126 09:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:37.126 09:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:37.126 09:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:37.126 09:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.126 09:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.126 09:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:37.126 09:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.385 09:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:37.385 "name": "Existed_Raid", 00:07:37.385 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:37.385 "strip_size_kb": 64, 00:07:37.385 "state": "configuring", 00:07:37.385 
"raid_level": "raid0", 00:07:37.385 "superblock": false, 00:07:37.385 "num_base_bdevs": 2, 00:07:37.385 "num_base_bdevs_discovered": 1, 00:07:37.385 "num_base_bdevs_operational": 2, 00:07:37.385 "base_bdevs_list": [ 00:07:37.385 { 00:07:37.385 "name": "BaseBdev1", 00:07:37.385 "uuid": "f71ccb72-ab98-4f6f-9a64-f8ea64804674", 00:07:37.385 "is_configured": true, 00:07:37.385 "data_offset": 0, 00:07:37.385 "data_size": 65536 00:07:37.385 }, 00:07:37.385 { 00:07:37.385 "name": "BaseBdev2", 00:07:37.385 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:37.385 "is_configured": false, 00:07:37.385 "data_offset": 0, 00:07:37.385 "data_size": 0 00:07:37.385 } 00:07:37.385 ] 00:07:37.385 }' 00:07:37.385 09:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:37.385 09:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.643 09:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:37.643 09:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.643 09:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.643 [2024-10-21 09:52:14.143359] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:37.643 [2024-10-21 09:52:14.143515] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:07:37.643 [2024-10-21 09:52:14.143543] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:37.643 [2024-10-21 09:52:14.143909] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:07:37.643 [2024-10-21 09:52:14.144149] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:07:37.643 [2024-10-21 09:52:14.144199] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name 
Existed_Raid, raid_bdev 0x617000006280 00:07:37.643 [2024-10-21 09:52:14.144543] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:37.643 BaseBdev2 00:07:37.643 09:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.643 09:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:37.643 09:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:37.643 09:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:37.643 09:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:37.643 09:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:37.643 09:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:37.643 09:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:37.643 09:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.643 09:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.643 09:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.643 09:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:37.643 09:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.643 09:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.643 [ 00:07:37.643 { 00:07:37.643 "name": "BaseBdev2", 00:07:37.643 "aliases": [ 00:07:37.643 "81c1130d-ed97-49cb-a1af-e98bb83e7e94" 00:07:37.643 ], 00:07:37.643 "product_name": "Malloc disk", 00:07:37.644 "block_size": 512, 00:07:37.644 
"num_blocks": 65536, 00:07:37.644 "uuid": "81c1130d-ed97-49cb-a1af-e98bb83e7e94", 00:07:37.644 "assigned_rate_limits": { 00:07:37.644 "rw_ios_per_sec": 0, 00:07:37.644 "rw_mbytes_per_sec": 0, 00:07:37.644 "r_mbytes_per_sec": 0, 00:07:37.644 "w_mbytes_per_sec": 0 00:07:37.644 }, 00:07:37.644 "claimed": true, 00:07:37.644 "claim_type": "exclusive_write", 00:07:37.644 "zoned": false, 00:07:37.644 "supported_io_types": { 00:07:37.644 "read": true, 00:07:37.644 "write": true, 00:07:37.644 "unmap": true, 00:07:37.644 "flush": true, 00:07:37.644 "reset": true, 00:07:37.644 "nvme_admin": false, 00:07:37.644 "nvme_io": false, 00:07:37.644 "nvme_io_md": false, 00:07:37.644 "write_zeroes": true, 00:07:37.644 "zcopy": true, 00:07:37.644 "get_zone_info": false, 00:07:37.644 "zone_management": false, 00:07:37.644 "zone_append": false, 00:07:37.644 "compare": false, 00:07:37.644 "compare_and_write": false, 00:07:37.644 "abort": true, 00:07:37.644 "seek_hole": false, 00:07:37.644 "seek_data": false, 00:07:37.644 "copy": true, 00:07:37.644 "nvme_iov_md": false 00:07:37.644 }, 00:07:37.644 "memory_domains": [ 00:07:37.644 { 00:07:37.644 "dma_device_id": "system", 00:07:37.644 "dma_device_type": 1 00:07:37.644 }, 00:07:37.644 { 00:07:37.644 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:37.644 "dma_device_type": 2 00:07:37.644 } 00:07:37.644 ], 00:07:37.644 "driver_specific": {} 00:07:37.644 } 00:07:37.644 ] 00:07:37.644 09:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.644 09:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:37.644 09:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:37.644 09:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:37.644 09:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:37.644 09:52:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:37.644 09:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:37.644 09:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:37.644 09:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:37.644 09:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:37.644 09:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:37.644 09:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:37.644 09:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:37.644 09:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:37.644 09:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:37.644 09:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:37.644 09:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.644 09:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.644 09:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.644 09:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:37.644 "name": "Existed_Raid", 00:07:37.644 "uuid": "c13d9cf2-4315-4316-9199-204bb1b14ad0", 00:07:37.644 "strip_size_kb": 64, 00:07:37.644 "state": "online", 00:07:37.644 "raid_level": "raid0", 00:07:37.644 "superblock": false, 00:07:37.644 "num_base_bdevs": 2, 00:07:37.644 "num_base_bdevs_discovered": 2, 00:07:37.644 
"num_base_bdevs_operational": 2, 00:07:37.644 "base_bdevs_list": [ 00:07:37.644 { 00:07:37.644 "name": "BaseBdev1", 00:07:37.644 "uuid": "f71ccb72-ab98-4f6f-9a64-f8ea64804674", 00:07:37.644 "is_configured": true, 00:07:37.644 "data_offset": 0, 00:07:37.644 "data_size": 65536 00:07:37.644 }, 00:07:37.644 { 00:07:37.644 "name": "BaseBdev2", 00:07:37.644 "uuid": "81c1130d-ed97-49cb-a1af-e98bb83e7e94", 00:07:37.644 "is_configured": true, 00:07:37.644 "data_offset": 0, 00:07:37.644 "data_size": 65536 00:07:37.644 } 00:07:37.644 ] 00:07:37.644 }' 00:07:37.644 09:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:37.644 09:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.211 09:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:38.211 09:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:38.211 09:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:38.211 09:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:38.211 09:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:38.211 09:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:38.211 09:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:38.211 09:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:38.211 09:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.211 09:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.211 [2024-10-21 09:52:14.602967] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 
00:07:38.211 09:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.211 09:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:38.211 "name": "Existed_Raid", 00:07:38.211 "aliases": [ 00:07:38.211 "c13d9cf2-4315-4316-9199-204bb1b14ad0" 00:07:38.211 ], 00:07:38.211 "product_name": "Raid Volume", 00:07:38.211 "block_size": 512, 00:07:38.211 "num_blocks": 131072, 00:07:38.211 "uuid": "c13d9cf2-4315-4316-9199-204bb1b14ad0", 00:07:38.211 "assigned_rate_limits": { 00:07:38.211 "rw_ios_per_sec": 0, 00:07:38.211 "rw_mbytes_per_sec": 0, 00:07:38.211 "r_mbytes_per_sec": 0, 00:07:38.211 "w_mbytes_per_sec": 0 00:07:38.211 }, 00:07:38.211 "claimed": false, 00:07:38.211 "zoned": false, 00:07:38.211 "supported_io_types": { 00:07:38.211 "read": true, 00:07:38.211 "write": true, 00:07:38.211 "unmap": true, 00:07:38.211 "flush": true, 00:07:38.211 "reset": true, 00:07:38.211 "nvme_admin": false, 00:07:38.211 "nvme_io": false, 00:07:38.211 "nvme_io_md": false, 00:07:38.211 "write_zeroes": true, 00:07:38.211 "zcopy": false, 00:07:38.211 "get_zone_info": false, 00:07:38.211 "zone_management": false, 00:07:38.211 "zone_append": false, 00:07:38.211 "compare": false, 00:07:38.211 "compare_and_write": false, 00:07:38.211 "abort": false, 00:07:38.211 "seek_hole": false, 00:07:38.211 "seek_data": false, 00:07:38.211 "copy": false, 00:07:38.211 "nvme_iov_md": false 00:07:38.211 }, 00:07:38.211 "memory_domains": [ 00:07:38.211 { 00:07:38.211 "dma_device_id": "system", 00:07:38.211 "dma_device_type": 1 00:07:38.211 }, 00:07:38.211 { 00:07:38.211 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:38.211 "dma_device_type": 2 00:07:38.211 }, 00:07:38.211 { 00:07:38.211 "dma_device_id": "system", 00:07:38.211 "dma_device_type": 1 00:07:38.211 }, 00:07:38.211 { 00:07:38.211 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:38.211 "dma_device_type": 2 00:07:38.211 } 00:07:38.211 ], 00:07:38.211 "driver_specific": { 
00:07:38.211 "raid": { 00:07:38.211 "uuid": "c13d9cf2-4315-4316-9199-204bb1b14ad0", 00:07:38.211 "strip_size_kb": 64, 00:07:38.211 "state": "online", 00:07:38.211 "raid_level": "raid0", 00:07:38.211 "superblock": false, 00:07:38.211 "num_base_bdevs": 2, 00:07:38.211 "num_base_bdevs_discovered": 2, 00:07:38.211 "num_base_bdevs_operational": 2, 00:07:38.211 "base_bdevs_list": [ 00:07:38.211 { 00:07:38.211 "name": "BaseBdev1", 00:07:38.211 "uuid": "f71ccb72-ab98-4f6f-9a64-f8ea64804674", 00:07:38.211 "is_configured": true, 00:07:38.211 "data_offset": 0, 00:07:38.211 "data_size": 65536 00:07:38.211 }, 00:07:38.211 { 00:07:38.211 "name": "BaseBdev2", 00:07:38.211 "uuid": "81c1130d-ed97-49cb-a1af-e98bb83e7e94", 00:07:38.211 "is_configured": true, 00:07:38.211 "data_offset": 0, 00:07:38.211 "data_size": 65536 00:07:38.211 } 00:07:38.211 ] 00:07:38.211 } 00:07:38.211 } 00:07:38.211 }' 00:07:38.211 09:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:38.211 09:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:38.211 BaseBdev2' 00:07:38.211 09:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:38.211 09:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:38.211 09:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:38.211 09:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:38.211 09:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:38.211 09:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.211 
09:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.211 09:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.211 09:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:38.211 09:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:38.211 09:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:38.211 09:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:38.211 09:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:38.211 09:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.211 09:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.470 09:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.470 09:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:38.470 09:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:38.470 09:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:38.470 09:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.470 09:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.470 [2024-10-21 09:52:14.850353] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:38.470 [2024-10-21 09:52:14.850492] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:38.470 [2024-10-21 09:52:14.850562] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:38.471 09:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.471 09:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:38.471 09:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:38.471 09:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:38.471 09:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:38.471 09:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:38.471 09:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:38.471 09:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:38.471 09:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:38.471 09:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:38.471 09:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:38.471 09:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:38.471 09:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:38.471 09:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:38.471 09:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:38.471 09:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:38.471 09:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:38.471 09:52:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:38.471 09:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.471 09:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.471 09:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.471 09:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:38.471 "name": "Existed_Raid", 00:07:38.471 "uuid": "c13d9cf2-4315-4316-9199-204bb1b14ad0", 00:07:38.471 "strip_size_kb": 64, 00:07:38.471 "state": "offline", 00:07:38.471 "raid_level": "raid0", 00:07:38.471 "superblock": false, 00:07:38.471 "num_base_bdevs": 2, 00:07:38.471 "num_base_bdevs_discovered": 1, 00:07:38.471 "num_base_bdevs_operational": 1, 00:07:38.471 "base_bdevs_list": [ 00:07:38.471 { 00:07:38.471 "name": null, 00:07:38.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:38.471 "is_configured": false, 00:07:38.471 "data_offset": 0, 00:07:38.471 "data_size": 65536 00:07:38.471 }, 00:07:38.471 { 00:07:38.471 "name": "BaseBdev2", 00:07:38.471 "uuid": "81c1130d-ed97-49cb-a1af-e98bb83e7e94", 00:07:38.471 "is_configured": true, 00:07:38.471 "data_offset": 0, 00:07:38.471 "data_size": 65536 00:07:38.471 } 00:07:38.471 ] 00:07:38.471 }' 00:07:38.471 09:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:38.471 09:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.037 09:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:39.037 09:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:39.037 09:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:39.037 09:52:15 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:39.037 09:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.037 09:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.037 09:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.037 09:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:39.037 09:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:39.037 09:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:39.037 09:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.037 09:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.037 [2024-10-21 09:52:15.468512] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:39.037 [2024-10-21 09:52:15.468686] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state offline 00:07:39.037 09:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.037 09:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:39.037 09:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:39.037 09:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:39.037 09:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:39.037 09:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.037 09:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.037 09:52:15 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.037 09:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:39.037 09:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:39.037 09:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:39.037 09:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 60282 00:07:39.037 09:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 60282 ']' 00:07:39.037 09:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 60282 00:07:39.037 09:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:07:39.037 09:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:39.296 09:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60282 00:07:39.296 09:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:39.296 killing process with pid 60282 00:07:39.296 09:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:39.296 09:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60282' 00:07:39.296 09:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 60282 00:07:39.296 [2024-10-21 09:52:15.669143] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:39.296 09:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 60282 00:07:39.296 [2024-10-21 09:52:15.686473] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:40.671 09:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 
00:07:40.671 00:07:40.671 real 0m5.134s 00:07:40.671 user 0m7.298s 00:07:40.671 sys 0m0.854s 00:07:40.671 09:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:40.671 09:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.671 ************************************ 00:07:40.671 END TEST raid_state_function_test 00:07:40.671 ************************************ 00:07:40.671 09:52:16 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:07:40.671 09:52:16 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:40.671 09:52:16 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:40.671 09:52:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:40.671 ************************************ 00:07:40.671 START TEST raid_state_function_test_sb 00:07:40.671 ************************************ 00:07:40.671 09:52:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 2 true 00:07:40.671 09:52:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:40.671 09:52:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:40.671 09:52:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:40.671 09:52:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:40.671 09:52:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:40.671 09:52:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:40.671 09:52:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:40.671 09:52:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:07:40.671 09:52:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:40.671 09:52:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:40.671 09:52:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:40.671 09:52:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:40.671 09:52:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:40.671 09:52:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:40.671 09:52:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:40.671 09:52:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:40.671 09:52:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:40.671 09:52:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:40.671 09:52:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:40.671 09:52:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:40.671 09:52:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:40.671 09:52:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:40.671 09:52:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:40.671 09:52:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=60535 00:07:40.671 09:52:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:40.671 09:52:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60535' 00:07:40.671 Process raid pid: 60535 00:07:40.671 09:52:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 60535 00:07:40.671 09:52:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 60535 ']' 00:07:40.671 09:52:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:40.671 09:52:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:40.671 09:52:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:40.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:40.671 09:52:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:40.671 09:52:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.671 [2024-10-21 09:52:17.036234] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
00:07:40.672 [2024-10-21 09:52:17.036440] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:40.672 [2024-10-21 09:52:17.200069] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.930 [2024-10-21 09:52:17.340650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.188 [2024-10-21 09:52:17.587611] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:41.188 [2024-10-21 09:52:17.587754] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:41.448 09:52:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:41.448 09:52:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:07:41.448 09:52:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:41.448 09:52:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.448 09:52:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.448 [2024-10-21 09:52:17.869785] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:41.448 [2024-10-21 09:52:17.869858] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:41.448 [2024-10-21 09:52:17.869868] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:41.448 [2024-10-21 09:52:17.869879] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:41.448 09:52:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.448 
09:52:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:41.448 09:52:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:41.448 09:52:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:41.448 09:52:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:41.448 09:52:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:41.448 09:52:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:41.448 09:52:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:41.448 09:52:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:41.448 09:52:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:41.448 09:52:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:41.448 09:52:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:41.448 09:52:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:41.448 09:52:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.448 09:52:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.448 09:52:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.448 09:52:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:41.448 "name": "Existed_Raid", 00:07:41.448 "uuid": "7bc7f090-4c90-4e0b-ba1b-a7d530297fdf", 00:07:41.448 "strip_size_kb": 
64, 00:07:41.448 "state": "configuring", 00:07:41.448 "raid_level": "raid0", 00:07:41.448 "superblock": true, 00:07:41.448 "num_base_bdevs": 2, 00:07:41.448 "num_base_bdevs_discovered": 0, 00:07:41.448 "num_base_bdevs_operational": 2, 00:07:41.448 "base_bdevs_list": [ 00:07:41.448 { 00:07:41.448 "name": "BaseBdev1", 00:07:41.448 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:41.448 "is_configured": false, 00:07:41.448 "data_offset": 0, 00:07:41.448 "data_size": 0 00:07:41.448 }, 00:07:41.448 { 00:07:41.448 "name": "BaseBdev2", 00:07:41.448 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:41.448 "is_configured": false, 00:07:41.448 "data_offset": 0, 00:07:41.448 "data_size": 0 00:07:41.448 } 00:07:41.448 ] 00:07:41.448 }' 00:07:41.448 09:52:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:41.448 09:52:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.017 09:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:42.017 09:52:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.017 09:52:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.017 [2024-10-21 09:52:18.357032] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:42.017 [2024-10-21 09:52:18.357192] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000005b80 name Existed_Raid, state configuring 00:07:42.017 09:52:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.017 09:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:42.017 09:52:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.017 09:52:18 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.017 [2024-10-21 09:52:18.368974] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:42.017 [2024-10-21 09:52:18.369063] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:42.017 [2024-10-21 09:52:18.369088] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:42.017 [2024-10-21 09:52:18.369113] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:42.017 09:52:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.017 09:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:42.017 09:52:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.017 09:52:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.017 [2024-10-21 09:52:18.427369] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:42.017 BaseBdev1 00:07:42.017 09:52:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.017 09:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:42.017 09:52:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:42.017 09:52:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:42.017 09:52:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:42.017 09:52:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:42.017 09:52:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:07:42.017 09:52:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:42.017 09:52:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.017 09:52:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.017 09:52:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.017 09:52:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:42.017 09:52:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.017 09:52:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.017 [ 00:07:42.017 { 00:07:42.017 "name": "BaseBdev1", 00:07:42.017 "aliases": [ 00:07:42.017 "c2be77ff-4ea5-46a7-a33b-1c2372aa559c" 00:07:42.017 ], 00:07:42.017 "product_name": "Malloc disk", 00:07:42.017 "block_size": 512, 00:07:42.017 "num_blocks": 65536, 00:07:42.017 "uuid": "c2be77ff-4ea5-46a7-a33b-1c2372aa559c", 00:07:42.017 "assigned_rate_limits": { 00:07:42.017 "rw_ios_per_sec": 0, 00:07:42.017 "rw_mbytes_per_sec": 0, 00:07:42.017 "r_mbytes_per_sec": 0, 00:07:42.017 "w_mbytes_per_sec": 0 00:07:42.017 }, 00:07:42.017 "claimed": true, 00:07:42.017 "claim_type": "exclusive_write", 00:07:42.017 "zoned": false, 00:07:42.017 "supported_io_types": { 00:07:42.017 "read": true, 00:07:42.017 "write": true, 00:07:42.017 "unmap": true, 00:07:42.017 "flush": true, 00:07:42.017 "reset": true, 00:07:42.017 "nvme_admin": false, 00:07:42.017 "nvme_io": false, 00:07:42.017 "nvme_io_md": false, 00:07:42.017 "write_zeroes": true, 00:07:42.017 "zcopy": true, 00:07:42.017 "get_zone_info": false, 00:07:42.017 "zone_management": false, 00:07:42.017 "zone_append": false, 00:07:42.017 "compare": false, 00:07:42.017 "compare_and_write": false, 00:07:42.017 
"abort": true, 00:07:42.017 "seek_hole": false, 00:07:42.017 "seek_data": false, 00:07:42.017 "copy": true, 00:07:42.017 "nvme_iov_md": false 00:07:42.017 }, 00:07:42.017 "memory_domains": [ 00:07:42.017 { 00:07:42.017 "dma_device_id": "system", 00:07:42.017 "dma_device_type": 1 00:07:42.017 }, 00:07:42.017 { 00:07:42.017 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:42.017 "dma_device_type": 2 00:07:42.017 } 00:07:42.017 ], 00:07:42.017 "driver_specific": {} 00:07:42.017 } 00:07:42.017 ] 00:07:42.017 09:52:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.017 09:52:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:42.017 09:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:42.017 09:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:42.017 09:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:42.017 09:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:42.017 09:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:42.017 09:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:42.017 09:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:42.017 09:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:42.017 09:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:42.017 09:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:42.017 09:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:07:42.017 09:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:42.017 09:52:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.017 09:52:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.017 09:52:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.017 09:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:42.017 "name": "Existed_Raid", 00:07:42.017 "uuid": "a90d5d66-afaf-429c-8fee-c8022cfe9a8e", 00:07:42.017 "strip_size_kb": 64, 00:07:42.017 "state": "configuring", 00:07:42.017 "raid_level": "raid0", 00:07:42.017 "superblock": true, 00:07:42.017 "num_base_bdevs": 2, 00:07:42.017 "num_base_bdevs_discovered": 1, 00:07:42.017 "num_base_bdevs_operational": 2, 00:07:42.017 "base_bdevs_list": [ 00:07:42.017 { 00:07:42.017 "name": "BaseBdev1", 00:07:42.017 "uuid": "c2be77ff-4ea5-46a7-a33b-1c2372aa559c", 00:07:42.017 "is_configured": true, 00:07:42.017 "data_offset": 2048, 00:07:42.017 "data_size": 63488 00:07:42.017 }, 00:07:42.017 { 00:07:42.017 "name": "BaseBdev2", 00:07:42.017 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:42.017 "is_configured": false, 00:07:42.017 "data_offset": 0, 00:07:42.017 "data_size": 0 00:07:42.017 } 00:07:42.017 ] 00:07:42.017 }' 00:07:42.017 09:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:42.017 09:52:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.276 09:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:42.276 09:52:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.276 09:52:18 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:42.276 [2024-10-21 09:52:18.862707] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:42.276 [2024-10-21 09:52:18.862865] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000005f00 name Existed_Raid, state configuring 00:07:42.276 09:52:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.276 09:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:42.276 09:52:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.276 09:52:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.276 [2024-10-21 09:52:18.870768] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:42.534 [2024-10-21 09:52:18.872875] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:42.534 [2024-10-21 09:52:18.872954] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:42.534 09:52:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.534 09:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:42.534 09:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:42.534 09:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:42.534 09:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:42.534 09:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:42.534 09:52:18 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:42.534 09:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:42.534 09:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:42.534 09:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:42.534 09:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:42.534 09:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:42.534 09:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:42.534 09:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:42.534 09:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:42.534 09:52:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.534 09:52:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.534 09:52:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.534 09:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:42.534 "name": "Existed_Raid", 00:07:42.534 "uuid": "c50e0427-5062-4c3e-94fa-31a5a678a130", 00:07:42.534 "strip_size_kb": 64, 00:07:42.534 "state": "configuring", 00:07:42.534 "raid_level": "raid0", 00:07:42.534 "superblock": true, 00:07:42.534 "num_base_bdevs": 2, 00:07:42.534 "num_base_bdevs_discovered": 1, 00:07:42.534 "num_base_bdevs_operational": 2, 00:07:42.534 "base_bdevs_list": [ 00:07:42.534 { 00:07:42.534 "name": "BaseBdev1", 00:07:42.534 "uuid": "c2be77ff-4ea5-46a7-a33b-1c2372aa559c", 00:07:42.534 "is_configured": true, 00:07:42.534 "data_offset": 2048, 
00:07:42.534 "data_size": 63488 00:07:42.534 }, 00:07:42.534 { 00:07:42.534 "name": "BaseBdev2", 00:07:42.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:42.534 "is_configured": false, 00:07:42.534 "data_offset": 0, 00:07:42.534 "data_size": 0 00:07:42.534 } 00:07:42.534 ] 00:07:42.534 }' 00:07:42.534 09:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:42.534 09:52:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.793 09:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:42.793 09:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.793 09:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.793 [2024-10-21 09:52:19.351691] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:42.793 [2024-10-21 09:52:19.352075] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:07:42.793 [2024-10-21 09:52:19.352128] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:42.793 [2024-10-21 09:52:19.352433] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:07:42.793 [2024-10-21 09:52:19.352663] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:07:42.793 BaseBdev2 00:07:42.793 [2024-10-21 09:52:19.352712] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006280 00:07:42.793 [2024-10-21 09:52:19.352881] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:42.793 09:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.793 09:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # 
waitforbdev BaseBdev2 00:07:42.793 09:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:42.793 09:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:42.793 09:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:42.793 09:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:42.793 09:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:42.793 09:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:42.793 09:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.793 09:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.793 09:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.793 09:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:42.793 09:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.793 09:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.793 [ 00:07:42.793 { 00:07:42.793 "name": "BaseBdev2", 00:07:42.793 "aliases": [ 00:07:42.793 "6e943655-6f59-4c31-b9ff-c0f2f14e377b" 00:07:42.793 ], 00:07:42.793 "product_name": "Malloc disk", 00:07:42.793 "block_size": 512, 00:07:42.793 "num_blocks": 65536, 00:07:42.793 "uuid": "6e943655-6f59-4c31-b9ff-c0f2f14e377b", 00:07:42.793 "assigned_rate_limits": { 00:07:42.793 "rw_ios_per_sec": 0, 00:07:42.793 "rw_mbytes_per_sec": 0, 00:07:42.793 "r_mbytes_per_sec": 0, 00:07:42.793 "w_mbytes_per_sec": 0 00:07:42.793 }, 00:07:42.793 "claimed": true, 00:07:42.793 "claim_type": 
"exclusive_write", 00:07:42.793 "zoned": false, 00:07:42.793 "supported_io_types": { 00:07:42.793 "read": true, 00:07:42.793 "write": true, 00:07:42.793 "unmap": true, 00:07:42.793 "flush": true, 00:07:42.793 "reset": true, 00:07:42.793 "nvme_admin": false, 00:07:42.793 "nvme_io": false, 00:07:42.793 "nvme_io_md": false, 00:07:42.793 "write_zeroes": true, 00:07:42.793 "zcopy": true, 00:07:42.793 "get_zone_info": false, 00:07:42.793 "zone_management": false, 00:07:42.793 "zone_append": false, 00:07:42.793 "compare": false, 00:07:42.793 "compare_and_write": false, 00:07:42.793 "abort": true, 00:07:42.793 "seek_hole": false, 00:07:42.793 "seek_data": false, 00:07:42.793 "copy": true, 00:07:42.793 "nvme_iov_md": false 00:07:42.793 }, 00:07:42.793 "memory_domains": [ 00:07:42.793 { 00:07:42.793 "dma_device_id": "system", 00:07:42.793 "dma_device_type": 1 00:07:42.793 }, 00:07:42.793 { 00:07:42.793 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:42.793 "dma_device_type": 2 00:07:42.793 } 00:07:42.793 ], 00:07:42.793 "driver_specific": {} 00:07:42.793 } 00:07:42.793 ] 00:07:43.051 09:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.051 09:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:43.051 09:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:43.051 09:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:43.052 09:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:43.052 09:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:43.052 09:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:43.052 09:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:07:43.052 09:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:43.052 09:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:43.052 09:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:43.052 09:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:43.052 09:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:43.052 09:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:43.052 09:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:43.052 09:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:43.052 09:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.052 09:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.052 09:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.052 09:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:43.052 "name": "Existed_Raid", 00:07:43.052 "uuid": "c50e0427-5062-4c3e-94fa-31a5a678a130", 00:07:43.052 "strip_size_kb": 64, 00:07:43.052 "state": "online", 00:07:43.052 "raid_level": "raid0", 00:07:43.052 "superblock": true, 00:07:43.052 "num_base_bdevs": 2, 00:07:43.052 "num_base_bdevs_discovered": 2, 00:07:43.052 "num_base_bdevs_operational": 2, 00:07:43.052 "base_bdevs_list": [ 00:07:43.052 { 00:07:43.052 "name": "BaseBdev1", 00:07:43.052 "uuid": "c2be77ff-4ea5-46a7-a33b-1c2372aa559c", 00:07:43.052 "is_configured": true, 00:07:43.052 "data_offset": 2048, 00:07:43.052 "data_size": 63488 
00:07:43.052 }, 00:07:43.052 { 00:07:43.052 "name": "BaseBdev2", 00:07:43.052 "uuid": "6e943655-6f59-4c31-b9ff-c0f2f14e377b", 00:07:43.052 "is_configured": true, 00:07:43.052 "data_offset": 2048, 00:07:43.052 "data_size": 63488 00:07:43.052 } 00:07:43.052 ] 00:07:43.052 }' 00:07:43.052 09:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:43.052 09:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.318 09:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:43.318 09:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:43.318 09:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:43.318 09:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:43.318 09:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:43.318 09:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:43.318 09:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:43.318 09:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:43.318 09:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.318 09:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.318 [2024-10-21 09:52:19.827244] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:43.318 09:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.318 09:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:43.318 "name": 
"Existed_Raid", 00:07:43.318 "aliases": [ 00:07:43.318 "c50e0427-5062-4c3e-94fa-31a5a678a130" 00:07:43.318 ], 00:07:43.318 "product_name": "Raid Volume", 00:07:43.318 "block_size": 512, 00:07:43.318 "num_blocks": 126976, 00:07:43.318 "uuid": "c50e0427-5062-4c3e-94fa-31a5a678a130", 00:07:43.318 "assigned_rate_limits": { 00:07:43.318 "rw_ios_per_sec": 0, 00:07:43.318 "rw_mbytes_per_sec": 0, 00:07:43.318 "r_mbytes_per_sec": 0, 00:07:43.318 "w_mbytes_per_sec": 0 00:07:43.318 }, 00:07:43.318 "claimed": false, 00:07:43.318 "zoned": false, 00:07:43.318 "supported_io_types": { 00:07:43.318 "read": true, 00:07:43.318 "write": true, 00:07:43.318 "unmap": true, 00:07:43.318 "flush": true, 00:07:43.318 "reset": true, 00:07:43.318 "nvme_admin": false, 00:07:43.318 "nvme_io": false, 00:07:43.318 "nvme_io_md": false, 00:07:43.318 "write_zeroes": true, 00:07:43.318 "zcopy": false, 00:07:43.318 "get_zone_info": false, 00:07:43.318 "zone_management": false, 00:07:43.318 "zone_append": false, 00:07:43.318 "compare": false, 00:07:43.318 "compare_and_write": false, 00:07:43.318 "abort": false, 00:07:43.318 "seek_hole": false, 00:07:43.318 "seek_data": false, 00:07:43.318 "copy": false, 00:07:43.318 "nvme_iov_md": false 00:07:43.318 }, 00:07:43.318 "memory_domains": [ 00:07:43.318 { 00:07:43.318 "dma_device_id": "system", 00:07:43.318 "dma_device_type": 1 00:07:43.318 }, 00:07:43.318 { 00:07:43.318 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:43.318 "dma_device_type": 2 00:07:43.318 }, 00:07:43.318 { 00:07:43.318 "dma_device_id": "system", 00:07:43.318 "dma_device_type": 1 00:07:43.318 }, 00:07:43.318 { 00:07:43.318 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:43.318 "dma_device_type": 2 00:07:43.318 } 00:07:43.318 ], 00:07:43.318 "driver_specific": { 00:07:43.318 "raid": { 00:07:43.318 "uuid": "c50e0427-5062-4c3e-94fa-31a5a678a130", 00:07:43.318 "strip_size_kb": 64, 00:07:43.318 "state": "online", 00:07:43.318 "raid_level": "raid0", 00:07:43.318 "superblock": true, 00:07:43.318 
"num_base_bdevs": 2, 00:07:43.318 "num_base_bdevs_discovered": 2, 00:07:43.318 "num_base_bdevs_operational": 2, 00:07:43.318 "base_bdevs_list": [ 00:07:43.318 { 00:07:43.318 "name": "BaseBdev1", 00:07:43.318 "uuid": "c2be77ff-4ea5-46a7-a33b-1c2372aa559c", 00:07:43.318 "is_configured": true, 00:07:43.318 "data_offset": 2048, 00:07:43.318 "data_size": 63488 00:07:43.318 }, 00:07:43.318 { 00:07:43.318 "name": "BaseBdev2", 00:07:43.318 "uuid": "6e943655-6f59-4c31-b9ff-c0f2f14e377b", 00:07:43.318 "is_configured": true, 00:07:43.318 "data_offset": 2048, 00:07:43.318 "data_size": 63488 00:07:43.318 } 00:07:43.318 ] 00:07:43.318 } 00:07:43.318 } 00:07:43.318 }' 00:07:43.318 09:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:43.318 09:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:43.318 BaseBdev2' 00:07:43.319 09:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:43.590 09:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:43.590 09:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:43.590 09:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:43.590 09:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.590 09:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.590 09:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:43.590 09:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:07:43.590 09:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:43.590 09:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:43.590 09:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:43.590 09:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:43.590 09:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:43.590 09:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.590 09:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.590 09:52:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.590 09:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:43.590 09:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:43.590 09:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:43.590 09:52:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.590 09:52:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.590 [2024-10-21 09:52:20.050771] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:43.590 [2024-10-21 09:52:20.050821] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:43.590 [2024-10-21 09:52:20.050878] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:43.590 09:52:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:07:43.590 09:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:43.590 09:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:43.590 09:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:43.590 09:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:43.590 09:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:43.590 09:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:43.590 09:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:43.591 09:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:43.591 09:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:43.591 09:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:43.591 09:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:43.591 09:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:43.591 09:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:43.591 09:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:43.591 09:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:43.591 09:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:43.591 09:52:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.591 09:52:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:43.591 09:52:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.591 09:52:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.848 09:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:43.848 "name": "Existed_Raid", 00:07:43.848 "uuid": "c50e0427-5062-4c3e-94fa-31a5a678a130", 00:07:43.848 "strip_size_kb": 64, 00:07:43.848 "state": "offline", 00:07:43.848 "raid_level": "raid0", 00:07:43.848 "superblock": true, 00:07:43.848 "num_base_bdevs": 2, 00:07:43.848 "num_base_bdevs_discovered": 1, 00:07:43.848 "num_base_bdevs_operational": 1, 00:07:43.848 "base_bdevs_list": [ 00:07:43.848 { 00:07:43.848 "name": null, 00:07:43.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:43.848 "is_configured": false, 00:07:43.848 "data_offset": 0, 00:07:43.848 "data_size": 63488 00:07:43.848 }, 00:07:43.848 { 00:07:43.848 "name": "BaseBdev2", 00:07:43.848 "uuid": "6e943655-6f59-4c31-b9ff-c0f2f14e377b", 00:07:43.848 "is_configured": true, 00:07:43.848 "data_offset": 2048, 00:07:43.848 "data_size": 63488 00:07:43.848 } 00:07:43.848 ] 00:07:43.848 }' 00:07:43.848 09:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:43.848 09:52:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.105 09:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:44.105 09:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:44.105 09:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:44.105 09:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:44.106 09:52:20 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.106 09:52:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.106 09:52:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.106 09:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:44.106 09:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:44.106 09:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:44.106 09:52:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.106 09:52:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.106 [2024-10-21 09:52:20.644604] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:44.106 [2024-10-21 09:52:20.644763] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state offline 00:07:44.364 09:52:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.364 09:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:44.364 09:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:44.364 09:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:44.364 09:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:44.364 09:52:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.364 09:52:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.364 09:52:20 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.364 09:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:44.364 09:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:44.364 09:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:44.364 09:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 60535 00:07:44.364 09:52:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 60535 ']' 00:07:44.364 09:52:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 60535 00:07:44.364 09:52:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:07:44.364 09:52:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:44.364 09:52:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60535 00:07:44.364 killing process with pid 60535 00:07:44.364 09:52:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:44.364 09:52:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:44.364 09:52:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60535' 00:07:44.364 09:52:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 60535 00:07:44.364 [2024-10-21 09:52:20.841595] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:44.364 09:52:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 60535 00:07:44.364 [2024-10-21 09:52:20.860947] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:45.741 09:52:22 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@328 -- # return 0 00:07:45.741 00:07:45.741 real 0m5.122s 00:07:45.741 user 0m7.248s 00:07:45.741 sys 0m0.892s 00:07:45.741 09:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:45.741 ************************************ 00:07:45.741 END TEST raid_state_function_test_sb 00:07:45.741 ************************************ 00:07:45.741 09:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:45.741 09:52:22 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:07:45.741 09:52:22 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:45.741 09:52:22 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:45.741 09:52:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:45.741 ************************************ 00:07:45.741 START TEST raid_superblock_test 00:07:45.741 ************************************ 00:07:45.741 09:52:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 2 00:07:45.741 09:52:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:07:45.741 09:52:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:45.741 09:52:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:45.741 09:52:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:45.741 09:52:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:45.741 09:52:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:45.741 09:52:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:45.741 09:52:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:45.741 09:52:22 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:45.741 09:52:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:45.741 09:52:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:45.741 09:52:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:45.741 09:52:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:45.741 09:52:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:07:45.741 09:52:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:45.741 09:52:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:45.741 09:52:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=60787 00:07:45.741 09:52:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:45.741 09:52:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 60787 00:07:45.741 09:52:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 60787 ']' 00:07:45.741 09:52:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:45.741 09:52:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:45.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:45.741 09:52:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:45.741 09:52:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:45.741 09:52:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.741 [2024-10-21 09:52:22.218310] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:07:45.741 [2024-10-21 09:52:22.218512] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60787 ] 00:07:46.000 [2024-10-21 09:52:22.381177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.000 [2024-10-21 09:52:22.525900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.258 [2024-10-21 09:52:22.764729] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:46.258 [2024-10-21 09:52:22.764899] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:46.517 09:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:46.517 09:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:07:46.517 09:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:46.517 09:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:46.517 09:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:46.517 09:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:46.517 09:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:46.517 09:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:46.517 09:52:23 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:46.517 09:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:46.517 09:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:46.517 09:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.517 09:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.517 malloc1 00:07:46.517 09:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.517 09:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:46.517 09:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.517 09:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.517 [2024-10-21 09:52:23.107406] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:46.517 [2024-10-21 09:52:23.107487] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:46.517 [2024-10-21 09:52:23.107513] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:07:46.517 [2024-10-21 09:52:23.107523] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:46.517 [2024-10-21 09:52:23.109838] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:46.517 [2024-10-21 09:52:23.109870] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:46.777 pt1 00:07:46.777 09:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.777 09:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:46.777 09:52:23 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:46.777 09:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:46.777 09:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:46.777 09:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:46.777 09:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:46.777 09:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:46.777 09:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:46.777 09:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:46.777 09:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.777 09:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.777 malloc2 00:07:46.777 09:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.777 09:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:46.777 09:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.777 09:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.777 [2024-10-21 09:52:23.170506] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:46.777 [2024-10-21 09:52:23.170649] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:46.777 [2024-10-21 09:52:23.170687] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:07:46.777 
[2024-10-21 09:52:23.170715] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:46.777 [2024-10-21 09:52:23.172918] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:46.777 [2024-10-21 09:52:23.172984] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:46.777 pt2 00:07:46.777 09:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.777 09:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:46.777 09:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:46.777 09:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:46.777 09:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.777 09:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.777 [2024-10-21 09:52:23.182576] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:46.777 [2024-10-21 09:52:23.184557] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:46.777 [2024-10-21 09:52:23.184759] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000005b80 00:07:46.777 [2024-10-21 09:52:23.184805] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:46.777 [2024-10-21 09:52:23.185050] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:07:46.777 [2024-10-21 09:52:23.185274] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000005b80 00:07:46.777 [2024-10-21 09:52:23.185319] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000005b80 00:07:46.777 [2024-10-21 09:52:23.185489] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:46.777 09:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.777 09:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:46.777 09:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:46.777 09:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:46.777 09:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:46.777 09:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:46.777 09:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:46.777 09:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:46.777 09:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:46.777 09:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:46.777 09:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:46.777 09:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.777 09:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:46.777 09:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.777 09:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.777 09:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.777 09:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:46.777 "name": "raid_bdev1", 00:07:46.777 "uuid": 
"ad27826c-222a-4367-ab77-3f30540638e0", 00:07:46.777 "strip_size_kb": 64, 00:07:46.777 "state": "online", 00:07:46.777 "raid_level": "raid0", 00:07:46.777 "superblock": true, 00:07:46.777 "num_base_bdevs": 2, 00:07:46.777 "num_base_bdevs_discovered": 2, 00:07:46.777 "num_base_bdevs_operational": 2, 00:07:46.777 "base_bdevs_list": [ 00:07:46.777 { 00:07:46.777 "name": "pt1", 00:07:46.777 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:46.777 "is_configured": true, 00:07:46.777 "data_offset": 2048, 00:07:46.777 "data_size": 63488 00:07:46.777 }, 00:07:46.777 { 00:07:46.777 "name": "pt2", 00:07:46.777 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:46.777 "is_configured": true, 00:07:46.777 "data_offset": 2048, 00:07:46.777 "data_size": 63488 00:07:46.777 } 00:07:46.777 ] 00:07:46.777 }' 00:07:46.777 09:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:46.777 09:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.036 09:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:47.036 09:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:47.036 09:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:47.036 09:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:47.036 09:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:47.036 09:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:47.036 09:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:47.036 09:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:47.036 09:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.036 09:52:23 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.295 [2024-10-21 09:52:23.630085] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:47.295 09:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.295 09:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:47.295 "name": "raid_bdev1", 00:07:47.295 "aliases": [ 00:07:47.295 "ad27826c-222a-4367-ab77-3f30540638e0" 00:07:47.295 ], 00:07:47.295 "product_name": "Raid Volume", 00:07:47.295 "block_size": 512, 00:07:47.295 "num_blocks": 126976, 00:07:47.295 "uuid": "ad27826c-222a-4367-ab77-3f30540638e0", 00:07:47.295 "assigned_rate_limits": { 00:07:47.295 "rw_ios_per_sec": 0, 00:07:47.295 "rw_mbytes_per_sec": 0, 00:07:47.295 "r_mbytes_per_sec": 0, 00:07:47.295 "w_mbytes_per_sec": 0 00:07:47.295 }, 00:07:47.295 "claimed": false, 00:07:47.295 "zoned": false, 00:07:47.295 "supported_io_types": { 00:07:47.295 "read": true, 00:07:47.295 "write": true, 00:07:47.295 "unmap": true, 00:07:47.295 "flush": true, 00:07:47.295 "reset": true, 00:07:47.295 "nvme_admin": false, 00:07:47.295 "nvme_io": false, 00:07:47.295 "nvme_io_md": false, 00:07:47.295 "write_zeroes": true, 00:07:47.295 "zcopy": false, 00:07:47.295 "get_zone_info": false, 00:07:47.295 "zone_management": false, 00:07:47.295 "zone_append": false, 00:07:47.295 "compare": false, 00:07:47.295 "compare_and_write": false, 00:07:47.295 "abort": false, 00:07:47.295 "seek_hole": false, 00:07:47.295 "seek_data": false, 00:07:47.295 "copy": false, 00:07:47.295 "nvme_iov_md": false 00:07:47.295 }, 00:07:47.295 "memory_domains": [ 00:07:47.295 { 00:07:47.295 "dma_device_id": "system", 00:07:47.295 "dma_device_type": 1 00:07:47.295 }, 00:07:47.295 { 00:07:47.295 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:47.295 "dma_device_type": 2 00:07:47.295 }, 00:07:47.295 { 00:07:47.295 "dma_device_id": "system", 00:07:47.295 "dma_device_type": 
1 00:07:47.295 }, 00:07:47.295 { 00:07:47.295 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:47.295 "dma_device_type": 2 00:07:47.295 } 00:07:47.295 ], 00:07:47.295 "driver_specific": { 00:07:47.295 "raid": { 00:07:47.295 "uuid": "ad27826c-222a-4367-ab77-3f30540638e0", 00:07:47.295 "strip_size_kb": 64, 00:07:47.295 "state": "online", 00:07:47.295 "raid_level": "raid0", 00:07:47.295 "superblock": true, 00:07:47.295 "num_base_bdevs": 2, 00:07:47.295 "num_base_bdevs_discovered": 2, 00:07:47.295 "num_base_bdevs_operational": 2, 00:07:47.295 "base_bdevs_list": [ 00:07:47.295 { 00:07:47.295 "name": "pt1", 00:07:47.295 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:47.295 "is_configured": true, 00:07:47.295 "data_offset": 2048, 00:07:47.295 "data_size": 63488 00:07:47.295 }, 00:07:47.295 { 00:07:47.295 "name": "pt2", 00:07:47.295 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:47.295 "is_configured": true, 00:07:47.295 "data_offset": 2048, 00:07:47.295 "data_size": 63488 00:07:47.295 } 00:07:47.295 ] 00:07:47.295 } 00:07:47.295 } 00:07:47.295 }' 00:07:47.295 09:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:47.295 09:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:47.295 pt2' 00:07:47.295 09:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:47.295 09:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:47.295 09:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:47.295 09:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:47.296 09:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b 
pt1 00:07:47.296 09:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.296 09:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.296 09:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.296 09:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:47.296 09:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:47.296 09:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:47.296 09:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:47.296 09:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:47.296 09:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.296 09:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.296 09:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.296 09:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:47.296 09:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:47.296 09:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:47.296 09:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:47.296 09:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.296 09:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.296 [2024-10-21 09:52:23.857604] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:47.296 09:52:23 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.296 09:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=ad27826c-222a-4367-ab77-3f30540638e0 00:07:47.296 09:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z ad27826c-222a-4367-ab77-3f30540638e0 ']' 00:07:47.296 09:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:47.296 09:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.296 09:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.296 [2024-10-21 09:52:23.885292] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:47.296 [2024-10-21 09:52:23.885325] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:47.296 [2024-10-21 09:52:23.885441] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:47.296 [2024-10-21 09:52:23.885498] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:47.296 [2024-10-21 09:52:23.885512] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000005b80 name raid_bdev1, state offline 00:07:47.555 09:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.555 09:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:47.555 09:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.555 09:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:47.555 09:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.555 09:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.555 09:52:23 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:47.555 09:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:47.555 09:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:47.555 09:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:47.555 09:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.555 09:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.555 09:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.555 09:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:47.555 09:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:47.555 09:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.555 09:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.555 09:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.555 09:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:47.555 09:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.555 09:52:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:47.555 09:52:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.555 09:52:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.555 09:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:47.555 09:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd 
bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:47.555 09:52:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:07:47.555 09:52:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:47.555 09:52:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:47.555 09:52:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:47.555 09:52:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:47.555 09:52:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:47.555 09:52:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:47.555 09:52:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.555 09:52:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.555 [2024-10-21 09:52:24.025088] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:47.555 [2024-10-21 09:52:24.027322] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:47.555 [2024-10-21 09:52:24.027409] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:47.555 [2024-10-21 09:52:24.027472] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:47.555 [2024-10-21 09:52:24.027488] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:47.555 [2024-10-21 09:52:24.027500] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000005f00 name raid_bdev1, state configuring 00:07:47.555 request: 00:07:47.555 { 00:07:47.555 "name": "raid_bdev1", 00:07:47.555 "raid_level": "raid0", 00:07:47.555 "base_bdevs": [ 00:07:47.555 "malloc1", 00:07:47.555 "malloc2" 00:07:47.555 ], 00:07:47.555 "strip_size_kb": 64, 00:07:47.555 "superblock": false, 00:07:47.555 "method": "bdev_raid_create", 00:07:47.555 "req_id": 1 00:07:47.555 } 00:07:47.555 Got JSON-RPC error response 00:07:47.555 response: 00:07:47.555 { 00:07:47.555 "code": -17, 00:07:47.555 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:47.555 } 00:07:47.555 09:52:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:47.555 09:52:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:07:47.555 09:52:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:47.555 09:52:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:47.555 09:52:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:47.555 09:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:47.555 09:52:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.555 09:52:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.555 09:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:47.555 09:52:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.555 09:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:47.555 09:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:47.555 09:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:07:47.555 09:52:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.555 09:52:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.555 [2024-10-21 09:52:24.092937] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:47.555 [2024-10-21 09:52:24.093094] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:47.555 [2024-10-21 09:52:24.093136] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:07:47.555 [2024-10-21 09:52:24.093183] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:47.555 [2024-10-21 09:52:24.095796] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:47.555 [2024-10-21 09:52:24.095879] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:47.555 [2024-10-21 09:52:24.095992] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:47.555 [2024-10-21 09:52:24.096086] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:47.555 pt1 00:07:47.556 09:52:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.556 09:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:07:47.556 09:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:47.556 09:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:47.556 09:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:47.556 09:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:47.556 09:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:07:47.556 09:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:47.556 09:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:47.556 09:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:47.556 09:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:47.556 09:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:47.556 09:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:47.556 09:52:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.556 09:52:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.556 09:52:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.556 09:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:47.556 "name": "raid_bdev1", 00:07:47.556 "uuid": "ad27826c-222a-4367-ab77-3f30540638e0", 00:07:47.556 "strip_size_kb": 64, 00:07:47.556 "state": "configuring", 00:07:47.556 "raid_level": "raid0", 00:07:47.556 "superblock": true, 00:07:47.556 "num_base_bdevs": 2, 00:07:47.556 "num_base_bdevs_discovered": 1, 00:07:47.556 "num_base_bdevs_operational": 2, 00:07:47.556 "base_bdevs_list": [ 00:07:47.556 { 00:07:47.556 "name": "pt1", 00:07:47.556 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:47.556 "is_configured": true, 00:07:47.556 "data_offset": 2048, 00:07:47.556 "data_size": 63488 00:07:47.556 }, 00:07:47.556 { 00:07:47.556 "name": null, 00:07:47.556 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:47.556 "is_configured": false, 00:07:47.556 "data_offset": 2048, 00:07:47.556 "data_size": 63488 00:07:47.556 } 00:07:47.556 ] 00:07:47.556 }' 00:07:47.556 09:52:24 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:47.556 09:52:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.123 09:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:48.123 09:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:48.123 09:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:48.123 09:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:48.123 09:52:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.123 09:52:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.123 [2024-10-21 09:52:24.536196] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:48.123 [2024-10-21 09:52:24.536385] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:48.123 [2024-10-21 09:52:24.536414] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:07:48.123 [2024-10-21 09:52:24.536427] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:48.123 [2024-10-21 09:52:24.537004] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:48.123 [2024-10-21 09:52:24.537028] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:48.123 [2024-10-21 09:52:24.537121] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:48.123 [2024-10-21 09:52:24.537150] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:48.123 [2024-10-21 09:52:24.537268] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:07:48.123 [2024-10-21 09:52:24.537279] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:48.123 [2024-10-21 09:52:24.537524] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:48.123 [2024-10-21 09:52:24.537698] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:07:48.123 [2024-10-21 09:52:24.537715] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:07:48.123 [2024-10-21 09:52:24.537873] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:48.123 pt2 00:07:48.123 09:52:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.123 09:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:48.123 09:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:48.123 09:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:48.123 09:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:48.123 09:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:48.123 09:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:48.123 09:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:48.123 09:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:48.123 09:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:48.123 09:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:48.123 09:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:48.123 09:52:24 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:07:48.123 09:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:48.123 09:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:48.123 09:52:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.123 09:52:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.123 09:52:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.123 09:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:48.123 "name": "raid_bdev1", 00:07:48.123 "uuid": "ad27826c-222a-4367-ab77-3f30540638e0", 00:07:48.123 "strip_size_kb": 64, 00:07:48.123 "state": "online", 00:07:48.123 "raid_level": "raid0", 00:07:48.123 "superblock": true, 00:07:48.123 "num_base_bdevs": 2, 00:07:48.123 "num_base_bdevs_discovered": 2, 00:07:48.123 "num_base_bdevs_operational": 2, 00:07:48.123 "base_bdevs_list": [ 00:07:48.123 { 00:07:48.123 "name": "pt1", 00:07:48.123 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:48.123 "is_configured": true, 00:07:48.123 "data_offset": 2048, 00:07:48.123 "data_size": 63488 00:07:48.123 }, 00:07:48.123 { 00:07:48.123 "name": "pt2", 00:07:48.123 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:48.123 "is_configured": true, 00:07:48.123 "data_offset": 2048, 00:07:48.123 "data_size": 63488 00:07:48.123 } 00:07:48.123 ] 00:07:48.123 }' 00:07:48.123 09:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:48.123 09:52:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.382 09:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:48.382 09:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:48.382 
09:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:48.382 09:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:48.382 09:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:48.382 09:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:48.382 09:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:48.382 09:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:48.382 09:52:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.382 09:52:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.382 [2024-10-21 09:52:24.971822] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:48.641 09:52:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.641 09:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:48.641 "name": "raid_bdev1", 00:07:48.641 "aliases": [ 00:07:48.641 "ad27826c-222a-4367-ab77-3f30540638e0" 00:07:48.641 ], 00:07:48.641 "product_name": "Raid Volume", 00:07:48.641 "block_size": 512, 00:07:48.641 "num_blocks": 126976, 00:07:48.641 "uuid": "ad27826c-222a-4367-ab77-3f30540638e0", 00:07:48.641 "assigned_rate_limits": { 00:07:48.641 "rw_ios_per_sec": 0, 00:07:48.641 "rw_mbytes_per_sec": 0, 00:07:48.641 "r_mbytes_per_sec": 0, 00:07:48.641 "w_mbytes_per_sec": 0 00:07:48.641 }, 00:07:48.641 "claimed": false, 00:07:48.641 "zoned": false, 00:07:48.641 "supported_io_types": { 00:07:48.641 "read": true, 00:07:48.641 "write": true, 00:07:48.641 "unmap": true, 00:07:48.641 "flush": true, 00:07:48.641 "reset": true, 00:07:48.641 "nvme_admin": false, 00:07:48.641 "nvme_io": false, 00:07:48.641 "nvme_io_md": false, 00:07:48.641 
"write_zeroes": true, 00:07:48.641 "zcopy": false, 00:07:48.641 "get_zone_info": false, 00:07:48.641 "zone_management": false, 00:07:48.641 "zone_append": false, 00:07:48.641 "compare": false, 00:07:48.641 "compare_and_write": false, 00:07:48.641 "abort": false, 00:07:48.641 "seek_hole": false, 00:07:48.641 "seek_data": false, 00:07:48.641 "copy": false, 00:07:48.641 "nvme_iov_md": false 00:07:48.641 }, 00:07:48.641 "memory_domains": [ 00:07:48.641 { 00:07:48.641 "dma_device_id": "system", 00:07:48.641 "dma_device_type": 1 00:07:48.641 }, 00:07:48.641 { 00:07:48.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:48.641 "dma_device_type": 2 00:07:48.641 }, 00:07:48.641 { 00:07:48.641 "dma_device_id": "system", 00:07:48.641 "dma_device_type": 1 00:07:48.641 }, 00:07:48.641 { 00:07:48.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:48.641 "dma_device_type": 2 00:07:48.641 } 00:07:48.641 ], 00:07:48.641 "driver_specific": { 00:07:48.641 "raid": { 00:07:48.641 "uuid": "ad27826c-222a-4367-ab77-3f30540638e0", 00:07:48.641 "strip_size_kb": 64, 00:07:48.641 "state": "online", 00:07:48.641 "raid_level": "raid0", 00:07:48.641 "superblock": true, 00:07:48.641 "num_base_bdevs": 2, 00:07:48.641 "num_base_bdevs_discovered": 2, 00:07:48.641 "num_base_bdevs_operational": 2, 00:07:48.641 "base_bdevs_list": [ 00:07:48.641 { 00:07:48.641 "name": "pt1", 00:07:48.641 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:48.641 "is_configured": true, 00:07:48.641 "data_offset": 2048, 00:07:48.641 "data_size": 63488 00:07:48.641 }, 00:07:48.641 { 00:07:48.641 "name": "pt2", 00:07:48.641 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:48.641 "is_configured": true, 00:07:48.641 "data_offset": 2048, 00:07:48.641 "data_size": 63488 00:07:48.641 } 00:07:48.641 ] 00:07:48.641 } 00:07:48.641 } 00:07:48.641 }' 00:07:48.641 09:52:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:07:48.641 09:52:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:48.641 pt2' 00:07:48.641 09:52:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:48.641 09:52:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:48.641 09:52:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:48.641 09:52:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:48.641 09:52:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:48.641 09:52:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.641 09:52:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.641 09:52:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.641 09:52:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:48.641 09:52:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:48.641 09:52:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:48.641 09:52:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:48.641 09:52:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:48.641 09:52:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.641 09:52:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.641 09:52:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.641 09:52:25 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:07:48.641 09:52:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:07:48.641 09:52:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:07:48.641 09:52:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid'
00:07:48.641 09:52:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:48.641 09:52:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:48.641 [2024-10-21 09:52:25.195397] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:07:48.641 09:52:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:48.901 09:52:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' ad27826c-222a-4367-ab77-3f30540638e0 '!=' ad27826c-222a-4367-ab77-3f30540638e0 ']'
00:07:48.901 09:52:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0
00:07:48.901 09:52:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:07:48.901 09:52:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1
00:07:48.901 09:52:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 60787
00:07:48.901 09:52:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 60787 ']'
00:07:48.901 09:52:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 60787
00:07:48.901 09:52:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname
00:07:48.901 09:52:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:48.901 09:52:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60787
killing process with pid 60787
00:07:48.901 09:52:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:07:48.901 09:52:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:07:48.901 09:52:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60787'
00:07:48.901 09:52:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 60787
00:07:48.901 [2024-10-21 09:52:25.265983] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:07:48.901 09:52:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 60787
00:07:48.901 [2024-10-21 09:52:25.266120] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:07:48.901 [2024-10-21 09:52:25.266179] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:07:48.901 [2024-10-21 09:52:25.266192] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline
00:07:48.901 [2024-10-21 09:52:25.482261] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:07:50.282 ************************************
00:07:50.282 END TEST raid_superblock_test
00:07:50.282 ************************************
00:07:50.282 09:52:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0
00:07:50.282
00:07:50.282 real 0m4.563s
00:07:50.282 user 0m6.270s
00:07:50.282 sys 0m0.804s
00:07:50.282 09:52:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:50.282 09:52:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:50.282 09:52:26 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read
00:07:50.282 09:52:26 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:07:50.282 09:52:26 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:50.282 09:52:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:07:50.282 ************************************
00:07:50.282 START TEST raid_read_error_test
00:07:50.282 ************************************
00:07:50.282 09:52:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 2 read
00:07:50.282 09:52:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0
00:07:50.282 09:52:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2
00:07:50.282 09:52:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read
00:07:50.282 09:52:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 ))
00:07:50.282 09:52:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:07:50.282 09:52:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1
00:07:50.282 09:52:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:07:50.282 09:52:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:07:50.282 09:52:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2
00:07:50.282 09:52:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:07:50.282 09:52:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:07:50.282 09:52:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:07:50.282 09:52:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs
00:07:50.282 09:52:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1
00:07:50.282 09:52:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size
00:07:50.282 09:52:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg
00:07:50.282 09:52:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log
00:07:50.282 09:52:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s
00:07:50.282 09:52:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']'
00:07:50.282 09:52:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64
00:07:50.282 09:52:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64'
00:07:50.282 09:52:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest
00:07:50.282 09:52:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.b1rUIlyggF
00:07:50.282 09:52:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=60993
00:07:50.282 09:52:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid
00:07:50.282 09:52:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 60993
00:07:50.282 09:52:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 60993 ']'
00:07:50.282 09:52:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:50.282 09:52:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:07:50.282 09:52:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:50.282 09:52:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:50.282 09:52:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:50.282 [2024-10-21 09:52:26.867817] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization...
00:07:50.282 [2024-10-21 09:52:26.868011] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60993 ]
00:07:50.541 [2024-10-21 09:52:27.016824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:50.800 [2024-10-21 09:52:27.162459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:51.059 [2024-10-21 09:52:27.415205] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:51.059 [2024-10-21 09:52:27.415272] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:51.317 09:52:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:51.317 09:52:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0
00:07:51.317 09:52:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:07:51.317 09:52:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:07:51.317 09:52:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:51.317 09:52:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:51.317 BaseBdev1_malloc
00:07:51.317 09:52:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:51.317 09:52:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc
00:07:51.317 09:52:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:51.317 09:52:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:51.317 true
00:07:51.317 09:52:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:51.317 09:52:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
00:07:51.317 09:52:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:51.317 09:52:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:51.317 [2024-10-21 09:52:27.746907] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc
00:07:51.317 [2024-10-21 09:52:27.746978] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:51.317 [2024-10-21 09:52:27.746996] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:07:51.317 [2024-10-21 09:52:27.747012] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:07:51.317 [2024-10-21 09:52:27.749349] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:07:51.317 [2024-10-21 09:52:27.749389] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:07:51.317 BaseBdev1
00:07:51.317 09:52:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:51.317 09:52:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:07:51.317 09:52:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:07:51.317 09:52:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:51.317 09:52:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:51.317 BaseBdev2_malloc
00:07:51.317 09:52:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:51.317 09:52:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc
00:07:51.317 09:52:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:51.317 09:52:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:51.317 true
00:07:51.317 09:52:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:51.317 09:52:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
00:07:51.317 09:52:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:51.317 09:52:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:51.317 [2024-10-21 09:52:27.823389] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc
00:07:51.317 [2024-10-21 09:52:27.823456] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:51.317 [2024-10-21 09:52:27.823473] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180
00:07:51.317 [2024-10-21 09:52:27.823485] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:07:51.318 [2024-10-21 09:52:27.825859] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:07:51.318 [2024-10-21 09:52:27.825896] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:07:51.318 BaseBdev2
00:07:51.318 09:52:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:51.318 09:52:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s
00:07:51.318 09:52:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:51.318 09:52:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:51.318 [2024-10-21 09:52:27.835433] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:07:51.318 [2024-10-21 09:52:27.837604] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:07:51.318 [2024-10-21 09:52:27.837812] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280
00:07:51.318 [2024-10-21 09:52:27.837827] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:07:51.318 [2024-10-21 09:52:27.838056] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0
00:07:51.318 [2024-10-21 09:52:27.838243] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280
00:07:51.318 [2024-10-21 09:52:27.838253] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280
00:07:51.318 [2024-10-21 09:52:27.838440] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:51.318 09:52:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:51.318 09:52:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2
00:07:51.318 09:52:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:07:51.318 09:52:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:07:51.318 09:52:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:07:51.318 09:52:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:51.318 09:52:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:51.318 09:52:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:51.318 09:52:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:51.318 09:52:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:51.318 09:52:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:51.318 09:52:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:51.318 09:52:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:07:51.318 09:52:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:51.318 09:52:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:51.318 09:52:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:51.318 09:52:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:51.318 "name": "raid_bdev1",
00:07:51.318 "uuid": "9181bc05-e69f-4bf8-a49a-a329d3bb93ba",
00:07:51.318 "strip_size_kb": 64,
00:07:51.318 "state": "online",
00:07:51.318 "raid_level": "raid0",
00:07:51.318 "superblock": true,
00:07:51.318 "num_base_bdevs": 2,
00:07:51.318 "num_base_bdevs_discovered": 2,
00:07:51.318 "num_base_bdevs_operational": 2,
00:07:51.318 "base_bdevs_list": [
00:07:51.318 {
00:07:51.318 "name": "BaseBdev1",
00:07:51.318 "uuid": "49927e25-610e-5792-b8f4-f024a4a7528c",
00:07:51.318 "is_configured": true,
00:07:51.318 "data_offset": 2048,
00:07:51.318 "data_size": 63488
00:07:51.318 },
00:07:51.318 {
00:07:51.318 "name": "BaseBdev2",
00:07:51.318 "uuid": "e11b1f5e-5aa0-5090-ab69-0076df8b255c",
00:07:51.318 "is_configured": true,
00:07:51.318 "data_offset": 2048,
00:07:51.318 "data_size": 63488
00:07:51.318 }
00:07:51.318 ]
00:07:51.318 }'
00:07:51.318 09:52:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:51.318 09:52:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:51.885 09:52:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1
00:07:51.885 09:52:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:07:51.885 [2024-10-21 09:52:28.328044] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70
00:07:52.821 09:52:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure
00:07:52.821 09:52:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:52.821 09:52:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:52.821 09:52:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:52.821 09:52:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs
00:07:52.821 09:52:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]]
00:07:52.821 09:52:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2
00:07:52.821 09:52:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2
00:07:52.821 09:52:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:07:52.821 09:52:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:07:52.821 09:52:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:07:52.821 09:52:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:52.821 09:52:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:52.821 09:52:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:52.821 09:52:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:52.821 09:52:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:52.821 09:52:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:52.821 09:52:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:52.821 09:52:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:07:52.821 09:52:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:52.821 09:52:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:52.821 09:52:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:52.821 09:52:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:52.821 "name": "raid_bdev1",
00:07:52.821 "uuid": "9181bc05-e69f-4bf8-a49a-a329d3bb93ba",
00:07:52.821 "strip_size_kb": 64,
00:07:52.821 "state": "online",
00:07:52.821 "raid_level": "raid0",
00:07:52.821 "superblock": true,
00:07:52.821 "num_base_bdevs": 2,
00:07:52.821 "num_base_bdevs_discovered": 2,
00:07:52.821 "num_base_bdevs_operational": 2,
00:07:52.821 "base_bdevs_list": [
00:07:52.821 {
00:07:52.821 "name": "BaseBdev1",
00:07:52.821 "uuid": "49927e25-610e-5792-b8f4-f024a4a7528c",
00:07:52.821 "is_configured": true,
00:07:52.821 "data_offset": 2048,
00:07:52.821 "data_size": 63488
00:07:52.821 },
00:07:52.821 {
00:07:52.821 "name": "BaseBdev2",
00:07:52.821 "uuid": "e11b1f5e-5aa0-5090-ab69-0076df8b255c",
00:07:52.821 "is_configured": true,
00:07:52.821 "data_offset": 2048,
00:07:52.821 "data_size": 63488
00:07:52.821 }
00:07:52.821 ]
00:07:52.821 }'
00:07:52.821 09:52:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:52.821 09:52:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:53.395 09:52:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:07:53.395 09:52:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:53.395 09:52:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:53.395 [2024-10-21 09:52:29.688252] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:07:53.395 [2024-10-21 09:52:29.688402] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:07:53.395 [2024-10-21 09:52:29.691015] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:07:53.395 [2024-10-21 09:52:29.691105] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:53.395 [2024-10-21 09:52:29.691159] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:07:53.395 [2024-10-21 09:52:29.691201] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline
00:07:53.395 09:52:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:53.395 {
00:07:53.395 "results": [
00:07:53.395 {
00:07:53.395 "job": "raid_bdev1",
00:07:53.395 "core_mask": "0x1",
00:07:53.395 "workload": "randrw",
00:07:53.395 "percentage": 50,
00:07:53.395 "status": "finished",
00:07:53.395 "queue_depth": 1,
00:07:53.395 "io_size": 131072,
00:07:53.395 "runtime": 1.360932,
00:07:53.395 "iops": 14857.465325233003,
00:07:53.395 "mibps": 1857.1831656541253,
00:07:53.395 "io_failed": 1,
00:07:53.395 "io_timeout": 0,
00:07:53.395 "avg_latency_us": 94.6731891204807,
00:07:53.395 "min_latency_us": 24.817467248908297,
00:07:53.395 "max_latency_us": 1316.4436681222708
00:07:53.395 }
00:07:53.395 ],
00:07:53.395 "core_count": 1
00:07:53.395 }
00:07:53.395 09:52:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 60993
00:07:53.395 09:52:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 60993 ']'
00:07:53.395 09:52:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 60993
00:07:53.395 09:52:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname
00:07:53.395 09:52:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:53.395 09:52:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60993
killing process with pid 60993
09:52:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:07:53.395 09:52:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:07:53.395 09:52:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60993'
00:07:53.395 09:52:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 60993
00:07:53.395 [2024-10-21 09:52:29.736637] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:07:53.395 09:52:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 60993
00:07:53.395 [2024-10-21 09:52:29.883626] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:07:54.816 09:52:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.b1rUIlyggF
00:07:54.816 09:52:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1
00:07:54.816 09:52:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}'
00:07:54.816 09:52:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73
00:07:54.816 09:52:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0
00:07:54.816 09:52:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:07:54.816 09:52:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1
00:07:54.816 09:52:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]]
************************************
00:07:54.816 END TEST raid_read_error_test
************************************
00:07:54.816
00:07:54.816 real 0m4.405s
00:07:54.816 user 0m5.160s
00:07:54.816 sys 0m0.600s
00:07:54.816 09:52:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:54.816 09:52:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:54.816 09:52:31 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write
00:07:54.816 09:52:31 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:07:54.816 09:52:31 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:54.816 09:52:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:07:54.816 ************************************
00:07:54.816 START TEST raid_write_error_test
00:07:54.816 ************************************
00:07:54.816 09:52:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 2 write
00:07:54.816 09:52:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0
00:07:54.816 09:52:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2
00:07:54.816 09:52:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write
00:07:54.816 09:52:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 ))
00:07:54.816 09:52:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:07:54.816 09:52:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1
00:07:54.816 09:52:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:07:54.816 09:52:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:07:54.816 09:52:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2
00:07:54.816 09:52:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:07:54.816 09:52:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:07:54.816 09:52:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:07:54.816 09:52:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs
00:07:54.816 09:52:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1
00:07:54.816 09:52:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size
00:07:54.816 09:52:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg
00:07:54.816 09:52:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log
00:07:54.816 09:52:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s
00:07:54.816 09:52:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']'
00:07:54.816 09:52:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64
00:07:54.816 09:52:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64'
00:07:54.816 09:52:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest
00:07:54.816 09:52:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.G472ESx6Se
00:07:54.816 09:52:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61133
00:07:54.816 09:52:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid
00:07:54.816 09:52:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61133
00:07:54.816 09:52:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 61133 ']'
00:07:54.816 09:52:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:54.816 09:52:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:07:54.816 09:52:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:54.816 09:52:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:54.816 09:52:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:54.816 [2024-10-21 09:52:31.337600] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization...
00:07:54.816 [2024-10-21 09:52:31.337801] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61133 ]
00:07:55.075 [2024-10-21 09:52:31.499278] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:55.075 [2024-10-21 09:52:31.645323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:55.334 [2024-10-21 09:52:31.895376] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:55.334 [2024-10-21 09:52:31.895444] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:55.592 09:52:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:55.592 09:52:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0
00:07:55.592 09:52:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:07:55.592 09:52:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:07:55.592 09:52:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:55.592 09:52:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:55.851 BaseBdev1_malloc
00:07:55.851 09:52:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:55.851 09:52:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc
00:07:55.851 09:52:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:55.851 09:52:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:55.851 true
00:07:55.851 09:52:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:55.851 09:52:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
00:07:55.851 09:52:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:55.851 09:52:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:55.851 [2024-10-21 09:52:32.231528] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc
00:07:55.851 [2024-10-21 09:52:32.231608] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:55.851 [2024-10-21 09:52:32.231625] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:07:55.851 [2024-10-21 09:52:32.231640] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:07:55.851 [2024-10-21 09:52:32.233884] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:07:55.851 [2024-10-21 09:52:32.233996] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:07:55.851 BaseBdev1
00:07:55.851 09:52:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:55.851 09:52:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:07:55.851 09:52:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:07:55.851 09:52:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:55.851 09:52:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:55.851 BaseBdev2_malloc
00:07:55.851 09:52:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:55.851 09:52:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc
00:07:55.851 09:52:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:55.851 09:52:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:55.852 true
00:07:55.852 09:52:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:55.852 09:52:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
00:07:55.852 09:52:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:55.852 09:52:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:55.852 [2024-10-21 09:52:32.304304] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc
00:07:55.852 [2024-10-21 09:52:32.304356] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:55.852 [2024-10-21 09:52:32.304372] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180
00:07:55.852 [2024-10-21 09:52:32.304383] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:07:55.852 [2024-10-21 09:52:32.306682] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:07:55.852 [2024-10-21 09:52:32.306719] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:07:55.852 BaseBdev2
00:07:55.852 09:52:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:55.852 09:52:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s
00:07:55.852 09:52:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:55.852 09:52:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:55.852 [2024-10-21 09:52:32.316346] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:07:55.852 [2024-10-21 09:52:32.318497] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:07:55.852 [2024-10-21 09:52:32.318702] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280
00:07:55.852 [2024-10-21 09:52:32.318718] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:07:55.852 [2024-10-21 09:52:32.318942] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0
00:07:55.852 [2024-10-21 09:52:32.319119] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280
00:07:55.852 [2024-10-21 09:52:32.319129] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280
00:07:55.852 [2024-10-21 09:52:32.319286] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:55.852 09:52:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:55.852 09:52:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2
00:07:55.852 09:52:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:07:55.852 09:52:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:07:55.852 09:52:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:07:55.852 09:52:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:55.852 09:52:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:55.852 09:52:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:55.852 09:52:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:55.852 09:52:32 bdev_raid.raid_write_error_test
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:55.852 09:52:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:55.852 09:52:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:55.852 09:52:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:55.852 09:52:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.852 09:52:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.852 09:52:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.852 09:52:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:55.852 "name": "raid_bdev1", 00:07:55.852 "uuid": "577b4d57-7db5-4081-957e-097fefdb29eb", 00:07:55.852 "strip_size_kb": 64, 00:07:55.852 "state": "online", 00:07:55.852 "raid_level": "raid0", 00:07:55.852 "superblock": true, 00:07:55.852 "num_base_bdevs": 2, 00:07:55.852 "num_base_bdevs_discovered": 2, 00:07:55.852 "num_base_bdevs_operational": 2, 00:07:55.852 "base_bdevs_list": [ 00:07:55.852 { 00:07:55.852 "name": "BaseBdev1", 00:07:55.852 "uuid": "946eed11-b9dc-54f8-870b-46cbe5acde6b", 00:07:55.852 "is_configured": true, 00:07:55.852 "data_offset": 2048, 00:07:55.852 "data_size": 63488 00:07:55.852 }, 00:07:55.852 { 00:07:55.852 "name": "BaseBdev2", 00:07:55.852 "uuid": "8b801d71-6dbc-5f0f-aff9-1ee161049d4e", 00:07:55.852 "is_configured": true, 00:07:55.852 "data_offset": 2048, 00:07:55.852 "data_size": 63488 00:07:55.852 } 00:07:55.852 ] 00:07:55.852 }' 00:07:55.852 09:52:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:55.852 09:52:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.420 09:52:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:56.420 09:52:32 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:56.420 [2024-10-21 09:52:32.829314] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:07:57.355 09:52:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:57.355 09:52:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.355 09:52:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.355 09:52:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.355 09:52:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:57.355 09:52:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:57.355 09:52:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:57.355 09:52:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:57.355 09:52:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:57.355 09:52:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:57.355 09:52:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:57.355 09:52:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:57.355 09:52:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:57.355 09:52:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:57.355 09:52:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:57.355 09:52:33 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:57.355 09:52:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:57.355 09:52:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:57.355 09:52:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:57.355 09:52:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.355 09:52:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.355 09:52:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.355 09:52:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:57.355 "name": "raid_bdev1", 00:07:57.355 "uuid": "577b4d57-7db5-4081-957e-097fefdb29eb", 00:07:57.355 "strip_size_kb": 64, 00:07:57.355 "state": "online", 00:07:57.355 "raid_level": "raid0", 00:07:57.355 "superblock": true, 00:07:57.355 "num_base_bdevs": 2, 00:07:57.355 "num_base_bdevs_discovered": 2, 00:07:57.355 "num_base_bdevs_operational": 2, 00:07:57.355 "base_bdevs_list": [ 00:07:57.355 { 00:07:57.355 "name": "BaseBdev1", 00:07:57.355 "uuid": "946eed11-b9dc-54f8-870b-46cbe5acde6b", 00:07:57.355 "is_configured": true, 00:07:57.355 "data_offset": 2048, 00:07:57.355 "data_size": 63488 00:07:57.355 }, 00:07:57.355 { 00:07:57.355 "name": "BaseBdev2", 00:07:57.355 "uuid": "8b801d71-6dbc-5f0f-aff9-1ee161049d4e", 00:07:57.355 "is_configured": true, 00:07:57.355 "data_offset": 2048, 00:07:57.355 "data_size": 63488 00:07:57.355 } 00:07:57.355 ] 00:07:57.355 }' 00:07:57.355 09:52:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:57.355 09:52:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.614 09:52:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- 
# rpc_cmd bdev_raid_delete raid_bdev1 00:07:57.614 09:52:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.614 09:52:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.614 [2024-10-21 09:52:34.185684] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:57.614 [2024-10-21 09:52:34.185832] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:57.614 [2024-10-21 09:52:34.188456] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:57.614 [2024-10-21 09:52:34.188543] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:57.614 [2024-10-21 09:52:34.188609] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:57.614 [2024-10-21 09:52:34.188658] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:07:57.614 { 00:07:57.614 "results": [ 00:07:57.614 { 00:07:57.614 "job": "raid_bdev1", 00:07:57.614 "core_mask": "0x1", 00:07:57.614 "workload": "randrw", 00:07:57.614 "percentage": 50, 00:07:57.614 "status": "finished", 00:07:57.614 "queue_depth": 1, 00:07:57.614 "io_size": 131072, 00:07:57.614 "runtime": 1.356995, 00:07:57.614 "iops": 14639.700219971333, 00:07:57.614 "mibps": 1829.9625274964167, 00:07:57.614 "io_failed": 1, 00:07:57.614 "io_timeout": 0, 00:07:57.614 "avg_latency_us": 95.92719163221449, 00:07:57.614 "min_latency_us": 25.4882096069869, 00:07:57.614 "max_latency_us": 1345.0620087336245 00:07:57.614 } 00:07:57.614 ], 00:07:57.614 "core_count": 1 00:07:57.614 } 00:07:57.614 09:52:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.614 09:52:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61133 00:07:57.614 09:52:34 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@950 -- # '[' -z 61133 ']' 00:07:57.614 09:52:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 61133 00:07:57.614 09:52:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:07:57.614 09:52:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:57.614 09:52:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61133 00:07:57.873 killing process with pid 61133 00:07:57.873 09:52:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:57.873 09:52:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:57.873 09:52:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61133' 00:07:57.873 09:52:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 61133 00:07:57.873 [2024-10-21 09:52:34.228160] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:57.873 09:52:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 61133 00:07:57.873 [2024-10-21 09:52:34.375422] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:59.250 09:52:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.G472ESx6Se 00:07:59.250 09:52:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:59.250 09:52:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:59.250 09:52:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:07:59.250 09:52:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:59.250 09:52:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:59.250 09:52:35 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@200 -- # return 1 00:07:59.250 09:52:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:07:59.250 00:07:59.250 real 0m4.406s 00:07:59.250 user 0m5.147s 00:07:59.250 sys 0m0.594s 00:07:59.250 09:52:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:59.250 09:52:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.250 ************************************ 00:07:59.250 END TEST raid_write_error_test 00:07:59.250 ************************************ 00:07:59.250 09:52:35 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:59.250 09:52:35 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:07:59.251 09:52:35 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:59.251 09:52:35 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:59.251 09:52:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:59.251 ************************************ 00:07:59.251 START TEST raid_state_function_test 00:07:59.251 ************************************ 00:07:59.251 09:52:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 2 false 00:07:59.251 09:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:59.251 09:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:59.251 09:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:59.251 09:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:59.251 09:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:59.251 09:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:07:59.251 09:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:59.251 09:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:59.251 09:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:59.251 09:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:59.251 09:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:59.251 09:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:59.251 09:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:59.251 09:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:59.251 09:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:59.251 09:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:59.251 09:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:59.251 09:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:59.251 09:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:07:59.251 09:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:59.251 09:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:59.251 09:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:59.251 09:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:59.251 09:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=61277 00:07:59.251 09:52:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:59.251 Process raid pid: 61277 00:07:59.251 09:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61277' 00:07:59.251 09:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61277 00:07:59.251 09:52:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 61277 ']' 00:07:59.251 09:52:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:59.251 09:52:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:59.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:59.251 09:52:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:59.251 09:52:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:59.251 09:52:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.251 [2024-10-21 09:52:35.805706] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
00:07:59.251 [2024-10-21 09:52:35.805883] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:59.510 [2024-10-21 09:52:35.955339] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.510 [2024-10-21 09:52:36.095727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.769 [2024-10-21 09:52:36.358332] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:59.769 [2024-10-21 09:52:36.358494] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:00.337 09:52:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:00.337 09:52:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:08:00.337 09:52:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:00.337 09:52:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.337 09:52:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.337 [2024-10-21 09:52:36.647094] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:00.337 [2024-10-21 09:52:36.647254] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:00.337 [2024-10-21 09:52:36.647269] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:00.337 [2024-10-21 09:52:36.647280] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:00.337 09:52:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.337 09:52:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:00.337 09:52:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:00.337 09:52:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:00.337 09:52:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:00.337 09:52:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:00.337 09:52:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:00.337 09:52:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:00.337 09:52:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:00.337 09:52:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:00.337 09:52:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:00.337 09:52:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:00.337 09:52:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:00.337 09:52:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.337 09:52:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.337 09:52:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.337 09:52:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:00.337 "name": "Existed_Raid", 00:08:00.337 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:00.338 "strip_size_kb": 64, 00:08:00.338 "state": "configuring", 00:08:00.338 
"raid_level": "concat", 00:08:00.338 "superblock": false, 00:08:00.338 "num_base_bdevs": 2, 00:08:00.338 "num_base_bdevs_discovered": 0, 00:08:00.338 "num_base_bdevs_operational": 2, 00:08:00.338 "base_bdevs_list": [ 00:08:00.338 { 00:08:00.338 "name": "BaseBdev1", 00:08:00.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:00.338 "is_configured": false, 00:08:00.338 "data_offset": 0, 00:08:00.338 "data_size": 0 00:08:00.338 }, 00:08:00.338 { 00:08:00.338 "name": "BaseBdev2", 00:08:00.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:00.338 "is_configured": false, 00:08:00.338 "data_offset": 0, 00:08:00.338 "data_size": 0 00:08:00.338 } 00:08:00.338 ] 00:08:00.338 }' 00:08:00.338 09:52:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:00.338 09:52:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.596 09:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:00.596 09:52:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.596 09:52:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.596 [2024-10-21 09:52:37.090711] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:00.596 [2024-10-21 09:52:37.090867] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000005b80 name Existed_Raid, state configuring 00:08:00.596 09:52:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.596 09:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:00.596 09:52:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.596 09:52:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:08:00.596 [2024-10-21 09:52:37.102317] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:00.596 [2024-10-21 09:52:37.102432] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:00.596 [2024-10-21 09:52:37.102459] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:00.596 [2024-10-21 09:52:37.102485] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:00.596 09:52:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.596 09:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:00.596 09:52:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.596 09:52:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.596 [2024-10-21 09:52:37.159371] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:00.596 BaseBdev1 00:08:00.596 09:52:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.596 09:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:00.596 09:52:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:00.596 09:52:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:00.596 09:52:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:00.596 09:52:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:00.596 09:52:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:00.596 09:52:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
rpc_cmd bdev_wait_for_examine 00:08:00.596 09:52:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.596 09:52:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.596 09:52:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.596 09:52:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:00.596 09:52:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.596 09:52:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.596 [ 00:08:00.596 { 00:08:00.596 "name": "BaseBdev1", 00:08:00.596 "aliases": [ 00:08:00.596 "fb40ca01-995f-4abd-8393-65f971cabf35" 00:08:00.596 ], 00:08:00.596 "product_name": "Malloc disk", 00:08:00.596 "block_size": 512, 00:08:00.596 "num_blocks": 65536, 00:08:00.596 "uuid": "fb40ca01-995f-4abd-8393-65f971cabf35", 00:08:00.596 "assigned_rate_limits": { 00:08:00.596 "rw_ios_per_sec": 0, 00:08:00.596 "rw_mbytes_per_sec": 0, 00:08:00.596 "r_mbytes_per_sec": 0, 00:08:00.596 "w_mbytes_per_sec": 0 00:08:00.596 }, 00:08:00.596 "claimed": true, 00:08:00.596 "claim_type": "exclusive_write", 00:08:00.596 "zoned": false, 00:08:00.596 "supported_io_types": { 00:08:00.596 "read": true, 00:08:00.596 "write": true, 00:08:00.596 "unmap": true, 00:08:00.596 "flush": true, 00:08:00.596 "reset": true, 00:08:00.596 "nvme_admin": false, 00:08:00.596 "nvme_io": false, 00:08:00.596 "nvme_io_md": false, 00:08:00.596 "write_zeroes": true, 00:08:00.855 "zcopy": true, 00:08:00.855 "get_zone_info": false, 00:08:00.855 "zone_management": false, 00:08:00.855 "zone_append": false, 00:08:00.855 "compare": false, 00:08:00.855 "compare_and_write": false, 00:08:00.855 "abort": true, 00:08:00.855 "seek_hole": false, 00:08:00.855 "seek_data": false, 00:08:00.855 "copy": true, 00:08:00.855 "nvme_iov_md": 
false 00:08:00.855 }, 00:08:00.855 "memory_domains": [ 00:08:00.855 { 00:08:00.855 "dma_device_id": "system", 00:08:00.855 "dma_device_type": 1 00:08:00.855 }, 00:08:00.855 { 00:08:00.855 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:00.855 "dma_device_type": 2 00:08:00.855 } 00:08:00.855 ], 00:08:00.855 "driver_specific": {} 00:08:00.855 } 00:08:00.855 ] 00:08:00.855 09:52:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.855 09:52:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:00.855 09:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:00.855 09:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:00.855 09:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:00.855 09:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:00.855 09:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:00.855 09:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:00.855 09:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:00.855 09:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:00.855 09:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:00.855 09:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:00.855 09:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:00.855 09:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:00.855 
09:52:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.855 09:52:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.855 09:52:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.855 09:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:00.855 "name": "Existed_Raid", 00:08:00.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:00.855 "strip_size_kb": 64, 00:08:00.855 "state": "configuring", 00:08:00.855 "raid_level": "concat", 00:08:00.855 "superblock": false, 00:08:00.855 "num_base_bdevs": 2, 00:08:00.855 "num_base_bdevs_discovered": 1, 00:08:00.855 "num_base_bdevs_operational": 2, 00:08:00.855 "base_bdevs_list": [ 00:08:00.855 { 00:08:00.855 "name": "BaseBdev1", 00:08:00.855 "uuid": "fb40ca01-995f-4abd-8393-65f971cabf35", 00:08:00.855 "is_configured": true, 00:08:00.855 "data_offset": 0, 00:08:00.855 "data_size": 65536 00:08:00.855 }, 00:08:00.855 { 00:08:00.855 "name": "BaseBdev2", 00:08:00.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:00.855 "is_configured": false, 00:08:00.855 "data_offset": 0, 00:08:00.855 "data_size": 0 00:08:00.855 } 00:08:00.855 ] 00:08:00.855 }' 00:08:00.855 09:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:00.855 09:52:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.115 09:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:01.115 09:52:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.115 09:52:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.115 [2024-10-21 09:52:37.582723] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:01.115 [2024-10-21 09:52:37.582802] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000005f00 name Existed_Raid, state configuring 00:08:01.115 09:52:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.115 09:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:01.115 09:52:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.115 09:52:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.115 [2024-10-21 09:52:37.594732] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:01.115 [2024-10-21 09:52:37.596993] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:01.115 [2024-10-21 09:52:37.597079] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:01.115 09:52:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.115 09:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:01.115 09:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:01.115 09:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:01.115 09:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:01.115 09:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:01.115 09:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:01.115 09:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:01.115 09:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:08:01.115 09:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:01.115 09:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:01.115 09:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:01.115 09:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:01.115 09:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.115 09:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:01.115 09:52:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.115 09:52:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.115 09:52:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.115 09:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:01.115 "name": "Existed_Raid", 00:08:01.115 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:01.115 "strip_size_kb": 64, 00:08:01.115 "state": "configuring", 00:08:01.115 "raid_level": "concat", 00:08:01.115 "superblock": false, 00:08:01.115 "num_base_bdevs": 2, 00:08:01.115 "num_base_bdevs_discovered": 1, 00:08:01.115 "num_base_bdevs_operational": 2, 00:08:01.115 "base_bdevs_list": [ 00:08:01.115 { 00:08:01.115 "name": "BaseBdev1", 00:08:01.115 "uuid": "fb40ca01-995f-4abd-8393-65f971cabf35", 00:08:01.115 "is_configured": true, 00:08:01.115 "data_offset": 0, 00:08:01.115 "data_size": 65536 00:08:01.115 }, 00:08:01.115 { 00:08:01.115 "name": "BaseBdev2", 00:08:01.115 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:01.115 "is_configured": false, 00:08:01.115 "data_offset": 0, 00:08:01.115 "data_size": 0 00:08:01.115 } 
00:08:01.115 ] 00:08:01.115 }' 00:08:01.115 09:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:01.115 09:52:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.690 09:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:01.690 09:52:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.690 09:52:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.691 [2024-10-21 09:52:38.102216] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:01.691 [2024-10-21 09:52:38.102385] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:08:01.691 [2024-10-21 09:52:38.102400] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:08:01.691 [2024-10-21 09:52:38.102743] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:08:01.691 [2024-10-21 09:52:38.102941] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:08:01.691 [2024-10-21 09:52:38.102955] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006280 00:08:01.691 [2024-10-21 09:52:38.103247] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:01.691 BaseBdev2 00:08:01.691 09:52:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.691 09:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:01.691 09:52:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:01.691 09:52:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:01.691 09:52:38 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:01.691 09:52:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:01.691 09:52:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:01.691 09:52:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:01.691 09:52:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.691 09:52:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.691 09:52:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.691 09:52:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:01.691 09:52:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.691 09:52:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.691 [ 00:08:01.691 { 00:08:01.691 "name": "BaseBdev2", 00:08:01.691 "aliases": [ 00:08:01.691 "6950b68f-53ea-400e-98d8-bd57058ac1d5" 00:08:01.691 ], 00:08:01.691 "product_name": "Malloc disk", 00:08:01.691 "block_size": 512, 00:08:01.691 "num_blocks": 65536, 00:08:01.691 "uuid": "6950b68f-53ea-400e-98d8-bd57058ac1d5", 00:08:01.691 "assigned_rate_limits": { 00:08:01.691 "rw_ios_per_sec": 0, 00:08:01.691 "rw_mbytes_per_sec": 0, 00:08:01.691 "r_mbytes_per_sec": 0, 00:08:01.691 "w_mbytes_per_sec": 0 00:08:01.691 }, 00:08:01.691 "claimed": true, 00:08:01.691 "claim_type": "exclusive_write", 00:08:01.691 "zoned": false, 00:08:01.691 "supported_io_types": { 00:08:01.691 "read": true, 00:08:01.691 "write": true, 00:08:01.691 "unmap": true, 00:08:01.691 "flush": true, 00:08:01.691 "reset": true, 00:08:01.691 "nvme_admin": false, 00:08:01.691 "nvme_io": false, 00:08:01.691 "nvme_io_md": 
false, 00:08:01.691 "write_zeroes": true, 00:08:01.691 "zcopy": true, 00:08:01.691 "get_zone_info": false, 00:08:01.691 "zone_management": false, 00:08:01.691 "zone_append": false, 00:08:01.691 "compare": false, 00:08:01.691 "compare_and_write": false, 00:08:01.691 "abort": true, 00:08:01.691 "seek_hole": false, 00:08:01.691 "seek_data": false, 00:08:01.691 "copy": true, 00:08:01.691 "nvme_iov_md": false 00:08:01.691 }, 00:08:01.691 "memory_domains": [ 00:08:01.691 { 00:08:01.691 "dma_device_id": "system", 00:08:01.691 "dma_device_type": 1 00:08:01.691 }, 00:08:01.691 { 00:08:01.691 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:01.691 "dma_device_type": 2 00:08:01.691 } 00:08:01.691 ], 00:08:01.691 "driver_specific": {} 00:08:01.691 } 00:08:01.691 ] 00:08:01.691 09:52:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.691 09:52:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:01.691 09:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:01.691 09:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:01.691 09:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:08:01.691 09:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:01.691 09:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:01.691 09:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:01.691 09:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:01.691 09:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:01.691 09:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:01.691 09:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:01.691 09:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:01.691 09:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:01.691 09:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.691 09:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:01.691 09:52:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.691 09:52:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.691 09:52:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.691 09:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:01.691 "name": "Existed_Raid", 00:08:01.691 "uuid": "f0e68e99-7df7-421e-bf17-ee6b7f461451", 00:08:01.691 "strip_size_kb": 64, 00:08:01.691 "state": "online", 00:08:01.691 "raid_level": "concat", 00:08:01.691 "superblock": false, 00:08:01.691 "num_base_bdevs": 2, 00:08:01.691 "num_base_bdevs_discovered": 2, 00:08:01.691 "num_base_bdevs_operational": 2, 00:08:01.691 "base_bdevs_list": [ 00:08:01.691 { 00:08:01.691 "name": "BaseBdev1", 00:08:01.691 "uuid": "fb40ca01-995f-4abd-8393-65f971cabf35", 00:08:01.691 "is_configured": true, 00:08:01.691 "data_offset": 0, 00:08:01.691 "data_size": 65536 00:08:01.691 }, 00:08:01.691 { 00:08:01.691 "name": "BaseBdev2", 00:08:01.691 "uuid": "6950b68f-53ea-400e-98d8-bd57058ac1d5", 00:08:01.691 "is_configured": true, 00:08:01.691 "data_offset": 0, 00:08:01.691 "data_size": 65536 00:08:01.691 } 00:08:01.691 ] 00:08:01.691 }' 00:08:01.691 09:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:01.691 09:52:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.273 09:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:02.273 09:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:02.273 09:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:02.273 09:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:02.273 09:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:02.273 09:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:02.273 09:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:02.273 09:52:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.273 09:52:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.273 09:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:02.273 [2024-10-21 09:52:38.597781] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:02.273 09:52:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.273 09:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:02.273 "name": "Existed_Raid", 00:08:02.273 "aliases": [ 00:08:02.273 "f0e68e99-7df7-421e-bf17-ee6b7f461451" 00:08:02.273 ], 00:08:02.273 "product_name": "Raid Volume", 00:08:02.273 "block_size": 512, 00:08:02.273 "num_blocks": 131072, 00:08:02.273 "uuid": "f0e68e99-7df7-421e-bf17-ee6b7f461451", 00:08:02.273 "assigned_rate_limits": { 00:08:02.273 "rw_ios_per_sec": 0, 00:08:02.273 "rw_mbytes_per_sec": 0, 00:08:02.273 "r_mbytes_per_sec": 
0, 00:08:02.273 "w_mbytes_per_sec": 0 00:08:02.273 }, 00:08:02.273 "claimed": false, 00:08:02.273 "zoned": false, 00:08:02.273 "supported_io_types": { 00:08:02.273 "read": true, 00:08:02.273 "write": true, 00:08:02.273 "unmap": true, 00:08:02.273 "flush": true, 00:08:02.273 "reset": true, 00:08:02.273 "nvme_admin": false, 00:08:02.273 "nvme_io": false, 00:08:02.273 "nvme_io_md": false, 00:08:02.273 "write_zeroes": true, 00:08:02.273 "zcopy": false, 00:08:02.273 "get_zone_info": false, 00:08:02.273 "zone_management": false, 00:08:02.273 "zone_append": false, 00:08:02.273 "compare": false, 00:08:02.273 "compare_and_write": false, 00:08:02.273 "abort": false, 00:08:02.273 "seek_hole": false, 00:08:02.273 "seek_data": false, 00:08:02.273 "copy": false, 00:08:02.273 "nvme_iov_md": false 00:08:02.273 }, 00:08:02.273 "memory_domains": [ 00:08:02.273 { 00:08:02.273 "dma_device_id": "system", 00:08:02.273 "dma_device_type": 1 00:08:02.273 }, 00:08:02.273 { 00:08:02.273 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:02.273 "dma_device_type": 2 00:08:02.273 }, 00:08:02.273 { 00:08:02.273 "dma_device_id": "system", 00:08:02.273 "dma_device_type": 1 00:08:02.273 }, 00:08:02.273 { 00:08:02.273 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:02.273 "dma_device_type": 2 00:08:02.273 } 00:08:02.273 ], 00:08:02.273 "driver_specific": { 00:08:02.273 "raid": { 00:08:02.273 "uuid": "f0e68e99-7df7-421e-bf17-ee6b7f461451", 00:08:02.273 "strip_size_kb": 64, 00:08:02.273 "state": "online", 00:08:02.273 "raid_level": "concat", 00:08:02.273 "superblock": false, 00:08:02.273 "num_base_bdevs": 2, 00:08:02.273 "num_base_bdevs_discovered": 2, 00:08:02.273 "num_base_bdevs_operational": 2, 00:08:02.273 "base_bdevs_list": [ 00:08:02.273 { 00:08:02.273 "name": "BaseBdev1", 00:08:02.273 "uuid": "fb40ca01-995f-4abd-8393-65f971cabf35", 00:08:02.273 "is_configured": true, 00:08:02.273 "data_offset": 0, 00:08:02.273 "data_size": 65536 00:08:02.273 }, 00:08:02.273 { 00:08:02.273 "name": "BaseBdev2", 
00:08:02.273 "uuid": "6950b68f-53ea-400e-98d8-bd57058ac1d5", 00:08:02.273 "is_configured": true, 00:08:02.273 "data_offset": 0, 00:08:02.273 "data_size": 65536 00:08:02.273 } 00:08:02.273 ] 00:08:02.273 } 00:08:02.273 } 00:08:02.273 }' 00:08:02.273 09:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:02.273 09:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:02.273 BaseBdev2' 00:08:02.273 09:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:02.273 09:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:02.273 09:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:02.273 09:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:02.273 09:52:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.273 09:52:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.273 09:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:02.273 09:52:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.273 09:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:02.273 09:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:02.273 09:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:02.273 09:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:08:02.273 09:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:02.273 09:52:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.273 09:52:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.273 09:52:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.273 09:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:02.273 09:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:02.273 09:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:02.273 09:52:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.273 09:52:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.273 [2024-10-21 09:52:38.829108] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:02.273 [2024-10-21 09:52:38.829158] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:02.273 [2024-10-21 09:52:38.829224] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:02.533 09:52:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.533 09:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:02.533 09:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:02.533 09:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:02.533 09:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:02.533 09:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:08:02.533 09:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:08:02.533 09:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:02.533 09:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:02.533 09:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:02.533 09:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:02.533 09:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:02.533 09:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:02.533 09:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:02.533 09:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:02.533 09:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:02.533 09:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.533 09:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:02.533 09:52:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.533 09:52:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.533 09:52:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.533 09:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:02.533 "name": "Existed_Raid", 00:08:02.533 "uuid": "f0e68e99-7df7-421e-bf17-ee6b7f461451", 00:08:02.533 "strip_size_kb": 64, 00:08:02.533 
"state": "offline", 00:08:02.533 "raid_level": "concat", 00:08:02.533 "superblock": false, 00:08:02.533 "num_base_bdevs": 2, 00:08:02.533 "num_base_bdevs_discovered": 1, 00:08:02.533 "num_base_bdevs_operational": 1, 00:08:02.533 "base_bdevs_list": [ 00:08:02.533 { 00:08:02.533 "name": null, 00:08:02.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:02.533 "is_configured": false, 00:08:02.533 "data_offset": 0, 00:08:02.533 "data_size": 65536 00:08:02.533 }, 00:08:02.533 { 00:08:02.533 "name": "BaseBdev2", 00:08:02.533 "uuid": "6950b68f-53ea-400e-98d8-bd57058ac1d5", 00:08:02.533 "is_configured": true, 00:08:02.533 "data_offset": 0, 00:08:02.533 "data_size": 65536 00:08:02.533 } 00:08:02.533 ] 00:08:02.533 }' 00:08:02.533 09:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:02.533 09:52:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.792 09:52:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:02.792 09:52:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:02.792 09:52:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.792 09:52:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:02.792 09:52:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.792 09:52:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.792 09:52:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.050 09:52:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:03.050 09:52:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:03.050 09:52:39 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:03.050 09:52:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.050 09:52:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.050 [2024-10-21 09:52:39.422702] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:03.050 [2024-10-21 09:52:39.422856] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state offline 00:08:03.050 09:52:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.050 09:52:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:03.050 09:52:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:03.050 09:52:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:03.050 09:52:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:03.050 09:52:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.050 09:52:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.050 09:52:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.050 09:52:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:03.050 09:52:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:03.050 09:52:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:03.050 09:52:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61277 00:08:03.050 09:52:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 61277 ']' 00:08:03.050 09:52:39 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # kill -0 61277 00:08:03.050 09:52:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:08:03.050 09:52:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:03.050 09:52:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61277 00:08:03.050 09:52:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:03.050 09:52:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:03.050 09:52:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61277' 00:08:03.050 killing process with pid 61277 00:08:03.050 09:52:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 61277 00:08:03.050 [2024-10-21 09:52:39.622738] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:03.050 09:52:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 61277 00:08:03.050 [2024-10-21 09:52:39.640218] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:04.428 09:52:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:04.428 00:08:04.428 real 0m5.154s 00:08:04.428 user 0m7.266s 00:08:04.428 sys 0m0.897s 00:08:04.428 09:52:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:04.428 ************************************ 00:08:04.428 END TEST raid_state_function_test 00:08:04.428 ************************************ 00:08:04.428 09:52:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.428 09:52:40 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:08:04.428 09:52:40 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 
']' 00:08:04.428 09:52:40 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:04.428 09:52:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:04.428 ************************************ 00:08:04.428 START TEST raid_state_function_test_sb 00:08:04.428 ************************************ 00:08:04.428 09:52:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 2 true 00:08:04.428 09:52:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:04.428 09:52:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:04.428 09:52:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:04.428 09:52:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:04.428 09:52:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:04.428 09:52:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:04.428 09:52:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:04.428 09:52:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:04.428 09:52:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:04.428 09:52:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:04.428 09:52:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:04.428 09:52:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:04.428 09:52:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:04.428 09:52:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:08:04.428 09:52:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:04.428 09:52:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:04.428 09:52:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:04.428 09:52:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:04.428 09:52:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:04.428 09:52:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:04.428 09:52:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:04.428 09:52:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:04.428 09:52:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:04.428 09:52:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=61530 00:08:04.428 09:52:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:04.428 09:52:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61530' 00:08:04.428 Process raid pid: 61530 00:08:04.428 09:52:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 61530 00:08:04.428 09:52:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 61530 ']' 00:08:04.428 09:52:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:04.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:04.428 09:52:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:04.428 09:52:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:04.428 09:52:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:04.428 09:52:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:04.688 [2024-10-21 09:52:41.024598] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:08:04.688 [2024-10-21 09:52:41.024705] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:04.688 [2024-10-21 09:52:41.188165] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.947 [2024-10-21 09:52:41.336404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.207 [2024-10-21 09:52:41.601184] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:05.207 [2024-10-21 09:52:41.601228] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:05.467 09:52:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:05.467 09:52:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:08:05.467 09:52:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:05.467 09:52:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.467 09:52:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:08:05.467 [2024-10-21 09:52:41.862000] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:05.467 [2024-10-21 09:52:41.862074] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:05.467 [2024-10-21 09:52:41.862085] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:05.467 [2024-10-21 09:52:41.862095] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:05.467 09:52:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.467 09:52:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:05.467 09:52:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:05.467 09:52:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:05.467 09:52:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:05.467 09:52:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:05.467 09:52:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:05.467 09:52:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:05.467 09:52:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:05.467 09:52:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:05.467 09:52:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:05.467 09:52:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:05.467 09:52:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:05.467 09:52:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.467 09:52:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.467 09:52:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.468 09:52:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:05.468 "name": "Existed_Raid", 00:08:05.468 "uuid": "7ed1861c-a6d3-422e-b7b5-6242daf17143", 00:08:05.468 "strip_size_kb": 64, 00:08:05.468 "state": "configuring", 00:08:05.468 "raid_level": "concat", 00:08:05.468 "superblock": true, 00:08:05.468 "num_base_bdevs": 2, 00:08:05.468 "num_base_bdevs_discovered": 0, 00:08:05.468 "num_base_bdevs_operational": 2, 00:08:05.468 "base_bdevs_list": [ 00:08:05.468 { 00:08:05.468 "name": "BaseBdev1", 00:08:05.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:05.468 "is_configured": false, 00:08:05.468 "data_offset": 0, 00:08:05.468 "data_size": 0 00:08:05.468 }, 00:08:05.468 { 00:08:05.468 "name": "BaseBdev2", 00:08:05.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:05.468 "is_configured": false, 00:08:05.468 "data_offset": 0, 00:08:05.468 "data_size": 0 00:08:05.468 } 00:08:05.468 ] 00:08:05.468 }' 00:08:05.468 09:52:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:05.468 09:52:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.727 09:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:05.727 09:52:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.727 09:52:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.727 
[2024-10-21 09:52:42.285448] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:05.727 [2024-10-21 09:52:42.285616] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000005b80 name Existed_Raid, state configuring 00:08:05.727 09:52:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.727 09:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:05.727 09:52:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.727 09:52:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.727 [2024-10-21 09:52:42.293451] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:05.727 [2024-10-21 09:52:42.293574] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:05.727 [2024-10-21 09:52:42.293605] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:05.727 [2024-10-21 09:52:42.293632] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:05.727 09:52:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.727 09:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:05.727 09:52:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.727 09:52:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.987 [2024-10-21 09:52:42.348556] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:05.987 BaseBdev1 00:08:05.987 09:52:42 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.987 09:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:05.987 09:52:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:05.987 09:52:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:05.987 09:52:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:05.987 09:52:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:05.987 09:52:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:05.987 09:52:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:05.987 09:52:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.987 09:52:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.987 09:52:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.987 09:52:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:05.988 09:52:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.988 09:52:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.988 [ 00:08:05.988 { 00:08:05.988 "name": "BaseBdev1", 00:08:05.988 "aliases": [ 00:08:05.988 "989c9003-8326-4d69-93df-4a3cb8282f19" 00:08:05.988 ], 00:08:05.988 "product_name": "Malloc disk", 00:08:05.988 "block_size": 512, 00:08:05.988 "num_blocks": 65536, 00:08:05.988 "uuid": "989c9003-8326-4d69-93df-4a3cb8282f19", 00:08:05.988 "assigned_rate_limits": { 00:08:05.988 "rw_ios_per_sec": 0, 00:08:05.988 "rw_mbytes_per_sec": 0, 
00:08:05.988 "r_mbytes_per_sec": 0, 00:08:05.988 "w_mbytes_per_sec": 0 00:08:05.988 }, 00:08:05.988 "claimed": true, 00:08:05.988 "claim_type": "exclusive_write", 00:08:05.988 "zoned": false, 00:08:05.988 "supported_io_types": { 00:08:05.988 "read": true, 00:08:05.988 "write": true, 00:08:05.988 "unmap": true, 00:08:05.988 "flush": true, 00:08:05.988 "reset": true, 00:08:05.988 "nvme_admin": false, 00:08:05.988 "nvme_io": false, 00:08:05.988 "nvme_io_md": false, 00:08:05.988 "write_zeroes": true, 00:08:05.988 "zcopy": true, 00:08:05.988 "get_zone_info": false, 00:08:05.988 "zone_management": false, 00:08:05.988 "zone_append": false, 00:08:05.988 "compare": false, 00:08:05.988 "compare_and_write": false, 00:08:05.988 "abort": true, 00:08:05.988 "seek_hole": false, 00:08:05.988 "seek_data": false, 00:08:05.988 "copy": true, 00:08:05.988 "nvme_iov_md": false 00:08:05.988 }, 00:08:05.988 "memory_domains": [ 00:08:05.988 { 00:08:05.988 "dma_device_id": "system", 00:08:05.988 "dma_device_type": 1 00:08:05.988 }, 00:08:05.988 { 00:08:05.988 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:05.988 "dma_device_type": 2 00:08:05.988 } 00:08:05.988 ], 00:08:05.988 "driver_specific": {} 00:08:05.988 } 00:08:05.988 ] 00:08:05.988 09:52:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.988 09:52:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:05.988 09:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:05.988 09:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:05.988 09:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:05.988 09:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:05.988 09:52:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:05.988 09:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:05.988 09:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:05.988 09:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:05.988 09:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:05.988 09:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:05.988 09:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:05.988 09:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:05.988 09:52:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.988 09:52:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.988 09:52:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.988 09:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:05.988 "name": "Existed_Raid", 00:08:05.988 "uuid": "f872b176-f496-47d9-8052-f0ebe05bff19", 00:08:05.988 "strip_size_kb": 64, 00:08:05.988 "state": "configuring", 00:08:05.988 "raid_level": "concat", 00:08:05.988 "superblock": true, 00:08:05.988 "num_base_bdevs": 2, 00:08:05.988 "num_base_bdevs_discovered": 1, 00:08:05.988 "num_base_bdevs_operational": 2, 00:08:05.988 "base_bdevs_list": [ 00:08:05.988 { 00:08:05.988 "name": "BaseBdev1", 00:08:05.988 "uuid": "989c9003-8326-4d69-93df-4a3cb8282f19", 00:08:05.988 "is_configured": true, 00:08:05.988 "data_offset": 2048, 00:08:05.988 "data_size": 63488 00:08:05.988 }, 00:08:05.988 { 
00:08:05.988 "name": "BaseBdev2", 00:08:05.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:05.988 "is_configured": false, 00:08:05.988 "data_offset": 0, 00:08:05.988 "data_size": 0 00:08:05.988 } 00:08:05.988 ] 00:08:05.988 }' 00:08:05.988 09:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:05.988 09:52:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.247 09:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:06.247 09:52:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.247 09:52:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.506 [2024-10-21 09:52:42.843765] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:06.506 [2024-10-21 09:52:42.843949] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000005f00 name Existed_Raid, state configuring 00:08:06.506 09:52:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.506 09:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:06.506 09:52:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.506 09:52:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.506 [2024-10-21 09:52:42.855795] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:06.506 [2024-10-21 09:52:42.857923] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:06.506 [2024-10-21 09:52:42.857970] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:06.506 09:52:42 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.506 09:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:06.506 09:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:06.506 09:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:06.506 09:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:06.506 09:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:06.506 09:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:06.506 09:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:06.506 09:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:06.506 09:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:06.506 09:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:06.506 09:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:06.506 09:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:06.506 09:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:06.506 09:52:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.506 09:52:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.506 09:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:06.506 09:52:42 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.506 09:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:06.506 "name": "Existed_Raid", 00:08:06.506 "uuid": "dd88de78-6e73-4d6e-a427-1b32d44ca8f6", 00:08:06.506 "strip_size_kb": 64, 00:08:06.506 "state": "configuring", 00:08:06.506 "raid_level": "concat", 00:08:06.506 "superblock": true, 00:08:06.506 "num_base_bdevs": 2, 00:08:06.507 "num_base_bdevs_discovered": 1, 00:08:06.507 "num_base_bdevs_operational": 2, 00:08:06.507 "base_bdevs_list": [ 00:08:06.507 { 00:08:06.507 "name": "BaseBdev1", 00:08:06.507 "uuid": "989c9003-8326-4d69-93df-4a3cb8282f19", 00:08:06.507 "is_configured": true, 00:08:06.507 "data_offset": 2048, 00:08:06.507 "data_size": 63488 00:08:06.507 }, 00:08:06.507 { 00:08:06.507 "name": "BaseBdev2", 00:08:06.507 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:06.507 "is_configured": false, 00:08:06.507 "data_offset": 0, 00:08:06.507 "data_size": 0 00:08:06.507 } 00:08:06.507 ] 00:08:06.507 }' 00:08:06.507 09:52:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:06.507 09:52:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.767 09:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:06.767 09:52:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.767 09:52:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.767 [2024-10-21 09:52:43.270280] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:06.767 [2024-10-21 09:52:43.270689] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:08:06.767 [2024-10-21 09:52:43.270744] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 
126976, blocklen 512 00:08:06.767 [2024-10-21 09:52:43.271054] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:08:06.767 [2024-10-21 09:52:43.271258] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:08:06.767 BaseBdev2 00:08:06.767 [2024-10-21 09:52:43.271305] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006280 00:08:06.767 [2024-10-21 09:52:43.271474] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:06.767 09:52:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.767 09:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:06.767 09:52:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:06.767 09:52:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:06.767 09:52:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:06.767 09:52:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:06.767 09:52:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:06.767 09:52:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:06.767 09:52:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.767 09:52:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.767 09:52:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.767 09:52:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:06.767 09:52:43 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.767 09:52:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.767 [ 00:08:06.767 { 00:08:06.767 "name": "BaseBdev2", 00:08:06.767 "aliases": [ 00:08:06.767 "57bc1799-4cf9-4732-b171-eb65f7203d0a" 00:08:06.767 ], 00:08:06.767 "product_name": "Malloc disk", 00:08:06.767 "block_size": 512, 00:08:06.767 "num_blocks": 65536, 00:08:06.767 "uuid": "57bc1799-4cf9-4732-b171-eb65f7203d0a", 00:08:06.767 "assigned_rate_limits": { 00:08:06.767 "rw_ios_per_sec": 0, 00:08:06.767 "rw_mbytes_per_sec": 0, 00:08:06.767 "r_mbytes_per_sec": 0, 00:08:06.767 "w_mbytes_per_sec": 0 00:08:06.767 }, 00:08:06.767 "claimed": true, 00:08:06.767 "claim_type": "exclusive_write", 00:08:06.767 "zoned": false, 00:08:06.767 "supported_io_types": { 00:08:06.767 "read": true, 00:08:06.767 "write": true, 00:08:06.767 "unmap": true, 00:08:06.767 "flush": true, 00:08:06.767 "reset": true, 00:08:06.767 "nvme_admin": false, 00:08:06.767 "nvme_io": false, 00:08:06.767 "nvme_io_md": false, 00:08:06.767 "write_zeroes": true, 00:08:06.767 "zcopy": true, 00:08:06.767 "get_zone_info": false, 00:08:06.767 "zone_management": false, 00:08:06.767 "zone_append": false, 00:08:06.767 "compare": false, 00:08:06.767 "compare_and_write": false, 00:08:06.767 "abort": true, 00:08:06.767 "seek_hole": false, 00:08:06.767 "seek_data": false, 00:08:06.767 "copy": true, 00:08:06.767 "nvme_iov_md": false 00:08:06.767 }, 00:08:06.767 "memory_domains": [ 00:08:06.767 { 00:08:06.767 "dma_device_id": "system", 00:08:06.767 "dma_device_type": 1 00:08:06.767 }, 00:08:06.767 { 00:08:06.767 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:06.767 "dma_device_type": 2 00:08:06.767 } 00:08:06.767 ], 00:08:06.767 "driver_specific": {} 00:08:06.767 } 00:08:06.767 ] 00:08:06.767 09:52:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.767 09:52:43 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:06.767 09:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:06.767 09:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:06.767 09:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:08:06.767 09:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:06.767 09:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:06.767 09:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:06.767 09:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:06.767 09:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:06.767 09:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:06.767 09:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:06.767 09:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:06.767 09:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:06.767 09:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:06.767 09:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:06.767 09:52:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.767 09:52:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.767 09:52:43 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.027 09:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:07.027 "name": "Existed_Raid", 00:08:07.027 "uuid": "dd88de78-6e73-4d6e-a427-1b32d44ca8f6", 00:08:07.027 "strip_size_kb": 64, 00:08:07.027 "state": "online", 00:08:07.027 "raid_level": "concat", 00:08:07.027 "superblock": true, 00:08:07.027 "num_base_bdevs": 2, 00:08:07.027 "num_base_bdevs_discovered": 2, 00:08:07.027 "num_base_bdevs_operational": 2, 00:08:07.027 "base_bdevs_list": [ 00:08:07.027 { 00:08:07.027 "name": "BaseBdev1", 00:08:07.027 "uuid": "989c9003-8326-4d69-93df-4a3cb8282f19", 00:08:07.028 "is_configured": true, 00:08:07.028 "data_offset": 2048, 00:08:07.028 "data_size": 63488 00:08:07.028 }, 00:08:07.028 { 00:08:07.028 "name": "BaseBdev2", 00:08:07.028 "uuid": "57bc1799-4cf9-4732-b171-eb65f7203d0a", 00:08:07.028 "is_configured": true, 00:08:07.028 "data_offset": 2048, 00:08:07.028 "data_size": 63488 00:08:07.028 } 00:08:07.028 ] 00:08:07.028 }' 00:08:07.028 09:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:07.028 09:52:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.288 09:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:07.288 09:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:07.288 09:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:07.288 09:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:07.288 09:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:07.288 09:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 
00:08:07.288 09:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:07.288 09:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:07.288 09:52:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.288 09:52:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.288 [2024-10-21 09:52:43.717890] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:07.288 09:52:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.288 09:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:07.288 "name": "Existed_Raid", 00:08:07.288 "aliases": [ 00:08:07.288 "dd88de78-6e73-4d6e-a427-1b32d44ca8f6" 00:08:07.288 ], 00:08:07.288 "product_name": "Raid Volume", 00:08:07.288 "block_size": 512, 00:08:07.288 "num_blocks": 126976, 00:08:07.288 "uuid": "dd88de78-6e73-4d6e-a427-1b32d44ca8f6", 00:08:07.288 "assigned_rate_limits": { 00:08:07.288 "rw_ios_per_sec": 0, 00:08:07.288 "rw_mbytes_per_sec": 0, 00:08:07.288 "r_mbytes_per_sec": 0, 00:08:07.288 "w_mbytes_per_sec": 0 00:08:07.288 }, 00:08:07.288 "claimed": false, 00:08:07.288 "zoned": false, 00:08:07.288 "supported_io_types": { 00:08:07.288 "read": true, 00:08:07.288 "write": true, 00:08:07.288 "unmap": true, 00:08:07.288 "flush": true, 00:08:07.288 "reset": true, 00:08:07.288 "nvme_admin": false, 00:08:07.288 "nvme_io": false, 00:08:07.288 "nvme_io_md": false, 00:08:07.288 "write_zeroes": true, 00:08:07.288 "zcopy": false, 00:08:07.288 "get_zone_info": false, 00:08:07.288 "zone_management": false, 00:08:07.288 "zone_append": false, 00:08:07.288 "compare": false, 00:08:07.288 "compare_and_write": false, 00:08:07.288 "abort": false, 00:08:07.288 "seek_hole": false, 00:08:07.288 "seek_data": false, 00:08:07.289 "copy": false, 
00:08:07.289 "nvme_iov_md": false 00:08:07.289 }, 00:08:07.289 "memory_domains": [ 00:08:07.289 { 00:08:07.289 "dma_device_id": "system", 00:08:07.289 "dma_device_type": 1 00:08:07.289 }, 00:08:07.289 { 00:08:07.289 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:07.289 "dma_device_type": 2 00:08:07.289 }, 00:08:07.289 { 00:08:07.289 "dma_device_id": "system", 00:08:07.289 "dma_device_type": 1 00:08:07.289 }, 00:08:07.289 { 00:08:07.289 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:07.289 "dma_device_type": 2 00:08:07.289 } 00:08:07.289 ], 00:08:07.289 "driver_specific": { 00:08:07.289 "raid": { 00:08:07.289 "uuid": "dd88de78-6e73-4d6e-a427-1b32d44ca8f6", 00:08:07.289 "strip_size_kb": 64, 00:08:07.289 "state": "online", 00:08:07.289 "raid_level": "concat", 00:08:07.289 "superblock": true, 00:08:07.289 "num_base_bdevs": 2, 00:08:07.289 "num_base_bdevs_discovered": 2, 00:08:07.289 "num_base_bdevs_operational": 2, 00:08:07.289 "base_bdevs_list": [ 00:08:07.289 { 00:08:07.289 "name": "BaseBdev1", 00:08:07.289 "uuid": "989c9003-8326-4d69-93df-4a3cb8282f19", 00:08:07.289 "is_configured": true, 00:08:07.289 "data_offset": 2048, 00:08:07.289 "data_size": 63488 00:08:07.289 }, 00:08:07.289 { 00:08:07.289 "name": "BaseBdev2", 00:08:07.289 "uuid": "57bc1799-4cf9-4732-b171-eb65f7203d0a", 00:08:07.289 "is_configured": true, 00:08:07.289 "data_offset": 2048, 00:08:07.289 "data_size": 63488 00:08:07.289 } 00:08:07.289 ] 00:08:07.289 } 00:08:07.289 } 00:08:07.289 }' 00:08:07.289 09:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:07.289 09:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:07.289 BaseBdev2' 00:08:07.289 09:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:07.289 09:52:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:07.289 09:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:07.289 09:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:07.289 09:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:07.289 09:52:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.289 09:52:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.289 09:52:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.549 09:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:07.549 09:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:07.549 09:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:07.549 09:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:07.549 09:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:07.549 09:52:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.549 09:52:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.549 09:52:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.549 09:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:07.549 09:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 
512 == \5\1\2\ \ \ ]] 00:08:07.549 09:52:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:07.549 09:52:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.549 09:52:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.549 [2024-10-21 09:52:43.929272] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:07.549 [2024-10-21 09:52:43.929418] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:07.549 [2024-10-21 09:52:43.929495] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:07.549 09:52:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.550 09:52:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:07.550 09:52:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:07.550 09:52:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:07.550 09:52:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:07.550 09:52:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:07.550 09:52:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:08:07.550 09:52:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:07.550 09:52:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:07.550 09:52:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:07.550 09:52:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:07.550 
09:52:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:07.550 09:52:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:07.550 09:52:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:07.550 09:52:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:07.550 09:52:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:07.550 09:52:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:07.550 09:52:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.550 09:52:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:07.550 09:52:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.550 09:52:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.550 09:52:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:07.550 "name": "Existed_Raid", 00:08:07.550 "uuid": "dd88de78-6e73-4d6e-a427-1b32d44ca8f6", 00:08:07.550 "strip_size_kb": 64, 00:08:07.550 "state": "offline", 00:08:07.550 "raid_level": "concat", 00:08:07.550 "superblock": true, 00:08:07.550 "num_base_bdevs": 2, 00:08:07.550 "num_base_bdevs_discovered": 1, 00:08:07.550 "num_base_bdevs_operational": 1, 00:08:07.550 "base_bdevs_list": [ 00:08:07.550 { 00:08:07.550 "name": null, 00:08:07.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:07.550 "is_configured": false, 00:08:07.550 "data_offset": 0, 00:08:07.550 "data_size": 63488 00:08:07.550 }, 00:08:07.550 { 00:08:07.550 "name": "BaseBdev2", 00:08:07.550 "uuid": "57bc1799-4cf9-4732-b171-eb65f7203d0a", 00:08:07.550 
"is_configured": true, 00:08:07.550 "data_offset": 2048, 00:08:07.550 "data_size": 63488 00:08:07.550 } 00:08:07.550 ] 00:08:07.550 }' 00:08:07.550 09:52:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:07.550 09:52:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.119 09:52:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:08.119 09:52:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:08.119 09:52:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:08.119 09:52:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.119 09:52:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.119 09:52:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.119 09:52:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.120 09:52:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:08.120 09:52:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:08.120 09:52:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:08.120 09:52:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.120 09:52:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.120 [2024-10-21 09:52:44.527964] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:08.120 [2024-10-21 09:52:44.528108] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state offline 00:08:08.120 09:52:44 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.120 09:52:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:08.120 09:52:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:08.120 09:52:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.120 09:52:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:08.120 09:52:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.120 09:52:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.120 09:52:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.120 09:52:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:08.120 09:52:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:08.120 09:52:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:08.120 09:52:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61530 00:08:08.120 09:52:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 61530 ']' 00:08:08.120 09:52:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 61530 00:08:08.120 09:52:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:08:08.120 09:52:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:08.120 09:52:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61530 00:08:08.379 killing process with pid 61530 00:08:08.379 09:52:44 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:08.379 09:52:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:08.379 09:52:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61530' 00:08:08.379 09:52:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 61530 00:08:08.379 [2024-10-21 09:52:44.726212] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:08.379 09:52:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 61530 00:08:08.379 [2024-10-21 09:52:44.745336] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:09.757 09:52:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:09.757 00:08:09.757 real 0m5.036s 00:08:09.757 user 0m7.078s 00:08:09.757 sys 0m0.886s 00:08:09.757 ************************************ 00:08:09.757 END TEST raid_state_function_test_sb 00:08:09.757 ************************************ 00:08:09.757 09:52:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:09.757 09:52:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.757 09:52:46 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:08:09.757 09:52:46 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:09.757 09:52:46 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:09.757 09:52:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:09.757 ************************************ 00:08:09.757 START TEST raid_superblock_test 00:08:09.757 ************************************ 00:08:09.757 09:52:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 2 00:08:09.757 09:52:46 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:08:09.757 09:52:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:08:09.757 09:52:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:09.757 09:52:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:09.757 09:52:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:09.757 09:52:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:09.757 09:52:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:09.757 09:52:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:09.757 09:52:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:09.757 09:52:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:09.757 09:52:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:09.757 09:52:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:09.757 09:52:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:09.757 09:52:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:08:09.757 09:52:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:09.757 09:52:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:09.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:09.757 09:52:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61782 00:08:09.757 09:52:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:09.757 09:52:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61782 00:08:09.757 09:52:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 61782 ']' 00:08:09.757 09:52:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:09.757 09:52:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:09.757 09:52:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:09.757 09:52:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:09.757 09:52:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.757 [2024-10-21 09:52:46.122121] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
00:08:09.757 [2024-10-21 09:52:46.122733] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61782 ] 00:08:09.757 [2024-10-21 09:52:46.282767] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.016 [2024-10-21 09:52:46.430994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.274 [2024-10-21 09:52:46.688089] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:10.274 [2024-10-21 09:52:46.688251] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:10.542 09:52:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:10.542 09:52:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:08:10.542 09:52:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:10.542 09:52:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:10.542 09:52:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:10.542 09:52:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:10.542 09:52:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:10.542 09:52:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:10.542 09:52:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:10.542 09:52:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:10.542 09:52:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:10.542 
09:52:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.542 09:52:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.542 malloc1 00:08:10.542 09:52:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.542 09:52:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:10.542 09:52:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.542 09:52:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.542 [2024-10-21 09:52:47.011788] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:10.542 [2024-10-21 09:52:47.011967] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:10.542 [2024-10-21 09:52:47.012012] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:08:10.542 [2024-10-21 09:52:47.012046] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:10.542 [2024-10-21 09:52:47.014404] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:10.542 [2024-10-21 09:52:47.014475] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:10.542 pt1 00:08:10.542 09:52:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.542 09:52:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:10.542 09:52:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:10.542 09:52:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:10.542 09:52:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:10.542 09:52:47 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:10.542 09:52:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:10.542 09:52:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:10.542 09:52:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:10.542 09:52:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:10.542 09:52:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.542 09:52:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.542 malloc2 00:08:10.542 09:52:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.542 09:52:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:10.542 09:52:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.542 09:52:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.542 [2024-10-21 09:52:47.076756] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:10.542 [2024-10-21 09:52:47.076874] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:10.542 [2024-10-21 09:52:47.076913] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:08:10.542 [2024-10-21 09:52:47.076937] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:10.542 [2024-10-21 09:52:47.079197] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:10.542 [2024-10-21 09:52:47.079266] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:10.542 
pt2 00:08:10.542 09:52:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.542 09:52:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:10.542 09:52:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:10.542 09:52:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:08:10.542 09:52:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.542 09:52:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.542 [2024-10-21 09:52:47.088820] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:10.542 [2024-10-21 09:52:47.090819] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:10.542 [2024-10-21 09:52:47.091030] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000005b80 00:08:10.542 [2024-10-21 09:52:47.091074] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:10.542 [2024-10-21 09:52:47.091347] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:08:10.542 [2024-10-21 09:52:47.091535] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000005b80 00:08:10.542 [2024-10-21 09:52:47.091589] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000005b80 00:08:10.542 [2024-10-21 09:52:47.091751] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:10.542 09:52:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.542 09:52:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:10.542 09:52:47 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:10.542 09:52:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:10.542 09:52:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:10.542 09:52:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:10.542 09:52:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:10.542 09:52:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:10.542 09:52:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:10.542 09:52:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:10.542 09:52:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:10.542 09:52:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.542 09:52:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:10.542 09:52:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.542 09:52:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.542 09:52:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.817 09:52:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:10.817 "name": "raid_bdev1", 00:08:10.817 "uuid": "564a9ebc-f7fd-4fdb-8edd-10c309f2257c", 00:08:10.817 "strip_size_kb": 64, 00:08:10.817 "state": "online", 00:08:10.817 "raid_level": "concat", 00:08:10.817 "superblock": true, 00:08:10.817 "num_base_bdevs": 2, 00:08:10.817 "num_base_bdevs_discovered": 2, 00:08:10.817 "num_base_bdevs_operational": 2, 00:08:10.817 "base_bdevs_list": [ 00:08:10.817 { 00:08:10.817 "name": "pt1", 
00:08:10.817 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:10.817 "is_configured": true, 00:08:10.817 "data_offset": 2048, 00:08:10.817 "data_size": 63488 00:08:10.817 }, 00:08:10.817 { 00:08:10.817 "name": "pt2", 00:08:10.817 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:10.817 "is_configured": true, 00:08:10.817 "data_offset": 2048, 00:08:10.817 "data_size": 63488 00:08:10.817 } 00:08:10.817 ] 00:08:10.817 }' 00:08:10.817 09:52:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:10.817 09:52:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.077 09:52:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:11.077 09:52:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:11.077 09:52:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:11.077 09:52:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:11.077 09:52:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:11.077 09:52:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:11.077 09:52:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:11.077 09:52:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.077 09:52:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.077 09:52:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:11.077 [2024-10-21 09:52:47.556334] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:11.077 09:52:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.077 09:52:47 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:11.077 "name": "raid_bdev1", 00:08:11.077 "aliases": [ 00:08:11.077 "564a9ebc-f7fd-4fdb-8edd-10c309f2257c" 00:08:11.077 ], 00:08:11.077 "product_name": "Raid Volume", 00:08:11.077 "block_size": 512, 00:08:11.077 "num_blocks": 126976, 00:08:11.077 "uuid": "564a9ebc-f7fd-4fdb-8edd-10c309f2257c", 00:08:11.077 "assigned_rate_limits": { 00:08:11.077 "rw_ios_per_sec": 0, 00:08:11.077 "rw_mbytes_per_sec": 0, 00:08:11.077 "r_mbytes_per_sec": 0, 00:08:11.077 "w_mbytes_per_sec": 0 00:08:11.077 }, 00:08:11.077 "claimed": false, 00:08:11.077 "zoned": false, 00:08:11.077 "supported_io_types": { 00:08:11.077 "read": true, 00:08:11.077 "write": true, 00:08:11.077 "unmap": true, 00:08:11.077 "flush": true, 00:08:11.077 "reset": true, 00:08:11.077 "nvme_admin": false, 00:08:11.077 "nvme_io": false, 00:08:11.077 "nvme_io_md": false, 00:08:11.077 "write_zeroes": true, 00:08:11.077 "zcopy": false, 00:08:11.077 "get_zone_info": false, 00:08:11.077 "zone_management": false, 00:08:11.077 "zone_append": false, 00:08:11.077 "compare": false, 00:08:11.077 "compare_and_write": false, 00:08:11.077 "abort": false, 00:08:11.077 "seek_hole": false, 00:08:11.077 "seek_data": false, 00:08:11.077 "copy": false, 00:08:11.077 "nvme_iov_md": false 00:08:11.077 }, 00:08:11.077 "memory_domains": [ 00:08:11.077 { 00:08:11.077 "dma_device_id": "system", 00:08:11.077 "dma_device_type": 1 00:08:11.077 }, 00:08:11.077 { 00:08:11.077 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:11.077 "dma_device_type": 2 00:08:11.077 }, 00:08:11.077 { 00:08:11.077 "dma_device_id": "system", 00:08:11.077 "dma_device_type": 1 00:08:11.077 }, 00:08:11.077 { 00:08:11.077 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:11.077 "dma_device_type": 2 00:08:11.077 } 00:08:11.077 ], 00:08:11.077 "driver_specific": { 00:08:11.077 "raid": { 00:08:11.077 "uuid": "564a9ebc-f7fd-4fdb-8edd-10c309f2257c", 00:08:11.077 "strip_size_kb": 64, 00:08:11.077 "state": "online", 00:08:11.077 
"raid_level": "concat", 00:08:11.077 "superblock": true, 00:08:11.077 "num_base_bdevs": 2, 00:08:11.077 "num_base_bdevs_discovered": 2, 00:08:11.077 "num_base_bdevs_operational": 2, 00:08:11.077 "base_bdevs_list": [ 00:08:11.077 { 00:08:11.077 "name": "pt1", 00:08:11.077 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:11.077 "is_configured": true, 00:08:11.077 "data_offset": 2048, 00:08:11.077 "data_size": 63488 00:08:11.077 }, 00:08:11.077 { 00:08:11.077 "name": "pt2", 00:08:11.077 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:11.077 "is_configured": true, 00:08:11.077 "data_offset": 2048, 00:08:11.077 "data_size": 63488 00:08:11.077 } 00:08:11.077 ] 00:08:11.077 } 00:08:11.077 } 00:08:11.077 }' 00:08:11.077 09:52:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:11.077 09:52:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:11.077 pt2' 00:08:11.077 09:52:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:11.077 09:52:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:11.077 09:52:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:11.077 09:52:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:11.078 09:52:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.078 09:52:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.078 09:52:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:11.336 09:52:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.336 09:52:47 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:11.336 09:52:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:11.336 09:52:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:11.336 09:52:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:11.337 09:52:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:11.337 09:52:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.337 09:52:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.337 09:52:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.337 09:52:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:11.337 09:52:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:11.337 09:52:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:11.337 09:52:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.337 09:52:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.337 09:52:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:11.337 [2024-10-21 09:52:47.763818] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:11.337 09:52:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.337 09:52:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=564a9ebc-f7fd-4fdb-8edd-10c309f2257c 00:08:11.337 09:52:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
564a9ebc-f7fd-4fdb-8edd-10c309f2257c ']' 00:08:11.337 09:52:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:11.337 09:52:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.337 09:52:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.337 [2024-10-21 09:52:47.815495] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:11.337 [2024-10-21 09:52:47.815524] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:11.337 [2024-10-21 09:52:47.815636] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:11.337 [2024-10-21 09:52:47.815691] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:11.337 [2024-10-21 09:52:47.815704] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000005b80 name raid_bdev1, state offline 00:08:11.337 09:52:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.337 09:52:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.337 09:52:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.337 09:52:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.337 09:52:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:11.337 09:52:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.337 09:52:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:11.337 09:52:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:11.337 09:52:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:11.337 09:52:47 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:11.337 09:52:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.337 09:52:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.337 09:52:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.337 09:52:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:11.337 09:52:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:11.337 09:52:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.337 09:52:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.337 09:52:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.337 09:52:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:11.337 09:52:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.337 09:52:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.337 09:52:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:11.337 09:52:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.597 09:52:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:11.597 09:52:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:11.597 09:52:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:08:11.597 09:52:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:11.597 09:52:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:11.597 09:52:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:11.597 09:52:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:11.597 09:52:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:11.597 09:52:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:11.597 09:52:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.597 09:52:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.597 [2024-10-21 09:52:47.951284] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:11.597 [2024-10-21 09:52:47.953466] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:11.597 [2024-10-21 09:52:47.953539] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:11.597 [2024-10-21 09:52:47.953603] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:11.597 [2024-10-21 09:52:47.953617] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:11.597 [2024-10-21 09:52:47.953627] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000005f00 name raid_bdev1, state configuring 00:08:11.597 request: 00:08:11.597 { 00:08:11.597 "name": "raid_bdev1", 00:08:11.597 "raid_level": "concat", 00:08:11.597 "base_bdevs": [ 00:08:11.597 "malloc1", 00:08:11.597 "malloc2" 00:08:11.597 ], 00:08:11.597 "strip_size_kb": 64, 
00:08:11.597 "superblock": false, 00:08:11.597 "method": "bdev_raid_create", 00:08:11.597 "req_id": 1 00:08:11.597 } 00:08:11.597 Got JSON-RPC error response 00:08:11.597 response: 00:08:11.597 { 00:08:11.597 "code": -17, 00:08:11.597 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:11.597 } 00:08:11.597 09:52:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:11.597 09:52:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:08:11.597 09:52:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:11.597 09:52:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:11.597 09:52:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:11.597 09:52:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:11.597 09:52:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.597 09:52:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.597 09:52:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.597 09:52:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.597 09:52:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:11.597 09:52:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:11.597 09:52:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:11.597 09:52:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.597 09:52:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.597 [2024-10-21 09:52:48.007140] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on malloc1 00:08:11.597 [2024-10-21 09:52:48.007263] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:11.597 [2024-10-21 09:52:48.007300] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:08:11.597 [2024-10-21 09:52:48.007333] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:11.597 [2024-10-21 09:52:48.009763] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:11.597 [2024-10-21 09:52:48.009835] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:11.597 [2024-10-21 09:52:48.009929] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:11.597 [2024-10-21 09:52:48.010002] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:11.597 pt1 00:08:11.597 09:52:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.597 09:52:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:08:11.597 09:52:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:11.597 09:52:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:11.597 09:52:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:11.597 09:52:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:11.597 09:52:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:11.597 09:52:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:11.597 09:52:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:11.597 09:52:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:11.597 09:52:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:11.597 09:52:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.597 09:52:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:11.597 09:52:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.597 09:52:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.597 09:52:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.597 09:52:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:11.597 "name": "raid_bdev1", 00:08:11.597 "uuid": "564a9ebc-f7fd-4fdb-8edd-10c309f2257c", 00:08:11.597 "strip_size_kb": 64, 00:08:11.597 "state": "configuring", 00:08:11.597 "raid_level": "concat", 00:08:11.597 "superblock": true, 00:08:11.597 "num_base_bdevs": 2, 00:08:11.597 "num_base_bdevs_discovered": 1, 00:08:11.597 "num_base_bdevs_operational": 2, 00:08:11.597 "base_bdevs_list": [ 00:08:11.597 { 00:08:11.597 "name": "pt1", 00:08:11.597 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:11.597 "is_configured": true, 00:08:11.597 "data_offset": 2048, 00:08:11.597 "data_size": 63488 00:08:11.597 }, 00:08:11.597 { 00:08:11.597 "name": null, 00:08:11.597 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:11.597 "is_configured": false, 00:08:11.597 "data_offset": 2048, 00:08:11.597 "data_size": 63488 00:08:11.597 } 00:08:11.597 ] 00:08:11.597 }' 00:08:11.597 09:52:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:11.597 09:52:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.857 09:52:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:08:11.857 09:52:48 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:11.857 09:52:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:11.857 09:52:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:11.857 09:52:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.857 09:52:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.857 [2024-10-21 09:52:48.386557] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:11.857 [2024-10-21 09:52:48.386764] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:11.857 [2024-10-21 09:52:48.386792] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:08:11.857 [2024-10-21 09:52:48.386806] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:11.857 [2024-10-21 09:52:48.387395] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:11.857 [2024-10-21 09:52:48.387419] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:11.857 [2024-10-21 09:52:48.387516] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:11.857 [2024-10-21 09:52:48.387545] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:11.857 [2024-10-21 09:52:48.387709] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:08:11.857 [2024-10-21 09:52:48.387729] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:11.857 [2024-10-21 09:52:48.387993] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:08:11.857 [2024-10-21 09:52:48.388149] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 
00:08:11.857 [2024-10-21 09:52:48.388158] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:08:11.857 [2024-10-21 09:52:48.388309] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:11.857 pt2 00:08:11.857 09:52:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.857 09:52:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:11.857 09:52:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:11.857 09:52:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:11.857 09:52:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:11.857 09:52:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:11.857 09:52:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:11.857 09:52:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:11.857 09:52:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:11.857 09:52:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:11.857 09:52:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:11.857 09:52:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:11.857 09:52:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:11.857 09:52:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.857 09:52:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:11.857 09:52:48 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.857 09:52:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.857 09:52:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.857 09:52:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:11.857 "name": "raid_bdev1", 00:08:11.857 "uuid": "564a9ebc-f7fd-4fdb-8edd-10c309f2257c", 00:08:11.857 "strip_size_kb": 64, 00:08:11.857 "state": "online", 00:08:11.857 "raid_level": "concat", 00:08:11.857 "superblock": true, 00:08:11.857 "num_base_bdevs": 2, 00:08:11.857 "num_base_bdevs_discovered": 2, 00:08:11.857 "num_base_bdevs_operational": 2, 00:08:11.857 "base_bdevs_list": [ 00:08:11.857 { 00:08:11.857 "name": "pt1", 00:08:11.857 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:11.857 "is_configured": true, 00:08:11.857 "data_offset": 2048, 00:08:11.857 "data_size": 63488 00:08:11.857 }, 00:08:11.857 { 00:08:11.857 "name": "pt2", 00:08:11.857 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:11.857 "is_configured": true, 00:08:11.857 "data_offset": 2048, 00:08:11.857 "data_size": 63488 00:08:11.857 } 00:08:11.857 ] 00:08:11.857 }' 00:08:11.857 09:52:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:11.857 09:52:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.424 09:52:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:12.424 09:52:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:12.424 09:52:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:12.424 09:52:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:12.424 09:52:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:12.424 09:52:48 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:12.424 09:52:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:12.424 09:52:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:12.424 09:52:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.424 09:52:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.424 [2024-10-21 09:52:48.842060] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:12.424 09:52:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.424 09:52:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:12.424 "name": "raid_bdev1", 00:08:12.424 "aliases": [ 00:08:12.424 "564a9ebc-f7fd-4fdb-8edd-10c309f2257c" 00:08:12.424 ], 00:08:12.424 "product_name": "Raid Volume", 00:08:12.424 "block_size": 512, 00:08:12.424 "num_blocks": 126976, 00:08:12.424 "uuid": "564a9ebc-f7fd-4fdb-8edd-10c309f2257c", 00:08:12.424 "assigned_rate_limits": { 00:08:12.424 "rw_ios_per_sec": 0, 00:08:12.424 "rw_mbytes_per_sec": 0, 00:08:12.424 "r_mbytes_per_sec": 0, 00:08:12.424 "w_mbytes_per_sec": 0 00:08:12.424 }, 00:08:12.424 "claimed": false, 00:08:12.424 "zoned": false, 00:08:12.424 "supported_io_types": { 00:08:12.424 "read": true, 00:08:12.424 "write": true, 00:08:12.424 "unmap": true, 00:08:12.424 "flush": true, 00:08:12.424 "reset": true, 00:08:12.424 "nvme_admin": false, 00:08:12.424 "nvme_io": false, 00:08:12.424 "nvme_io_md": false, 00:08:12.424 "write_zeroes": true, 00:08:12.424 "zcopy": false, 00:08:12.424 "get_zone_info": false, 00:08:12.424 "zone_management": false, 00:08:12.424 "zone_append": false, 00:08:12.424 "compare": false, 00:08:12.424 "compare_and_write": false, 00:08:12.424 "abort": false, 00:08:12.424 "seek_hole": false, 00:08:12.424 
"seek_data": false, 00:08:12.424 "copy": false, 00:08:12.424 "nvme_iov_md": false 00:08:12.424 }, 00:08:12.424 "memory_domains": [ 00:08:12.424 { 00:08:12.424 "dma_device_id": "system", 00:08:12.424 "dma_device_type": 1 00:08:12.424 }, 00:08:12.424 { 00:08:12.424 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:12.424 "dma_device_type": 2 00:08:12.424 }, 00:08:12.424 { 00:08:12.424 "dma_device_id": "system", 00:08:12.424 "dma_device_type": 1 00:08:12.424 }, 00:08:12.424 { 00:08:12.424 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:12.424 "dma_device_type": 2 00:08:12.424 } 00:08:12.424 ], 00:08:12.424 "driver_specific": { 00:08:12.424 "raid": { 00:08:12.424 "uuid": "564a9ebc-f7fd-4fdb-8edd-10c309f2257c", 00:08:12.424 "strip_size_kb": 64, 00:08:12.424 "state": "online", 00:08:12.424 "raid_level": "concat", 00:08:12.424 "superblock": true, 00:08:12.424 "num_base_bdevs": 2, 00:08:12.424 "num_base_bdevs_discovered": 2, 00:08:12.424 "num_base_bdevs_operational": 2, 00:08:12.424 "base_bdevs_list": [ 00:08:12.424 { 00:08:12.424 "name": "pt1", 00:08:12.424 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:12.424 "is_configured": true, 00:08:12.424 "data_offset": 2048, 00:08:12.424 "data_size": 63488 00:08:12.424 }, 00:08:12.424 { 00:08:12.424 "name": "pt2", 00:08:12.424 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:12.424 "is_configured": true, 00:08:12.424 "data_offset": 2048, 00:08:12.424 "data_size": 63488 00:08:12.424 } 00:08:12.424 ] 00:08:12.424 } 00:08:12.424 } 00:08:12.424 }' 00:08:12.424 09:52:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:12.424 09:52:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:12.424 pt2' 00:08:12.424 09:52:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:12.424 09:52:48 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:12.424 09:52:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:12.424 09:52:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:12.424 09:52:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.424 09:52:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.424 09:52:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:12.424 09:52:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.683 09:52:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:12.683 09:52:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:12.683 09:52:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:12.683 09:52:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:12.683 09:52:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.683 09:52:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.683 09:52:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:12.683 09:52:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.683 09:52:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:12.683 09:52:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:12.683 09:52:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:08:12.683 09:52:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.683 09:52:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:12.683 09:52:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.683 [2024-10-21 09:52:49.081598] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:12.683 09:52:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.683 09:52:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 564a9ebc-f7fd-4fdb-8edd-10c309f2257c '!=' 564a9ebc-f7fd-4fdb-8edd-10c309f2257c ']' 00:08:12.683 09:52:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:08:12.683 09:52:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:12.683 09:52:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:12.683 09:52:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61782 00:08:12.683 09:52:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 61782 ']' 00:08:12.683 09:52:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 61782 00:08:12.683 09:52:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:08:12.683 09:52:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:12.683 09:52:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61782 00:08:12.683 killing process with pid 61782 00:08:12.683 09:52:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:12.683 09:52:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:12.683 09:52:49 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 61782' 00:08:12.683 09:52:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 61782 00:08:12.683 [2024-10-21 09:52:49.170723] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:12.683 [2024-10-21 09:52:49.170831] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:12.683 [2024-10-21 09:52:49.170885] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:12.683 09:52:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 61782 00:08:12.683 [2024-10-21 09:52:49.170897] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:08:12.943 [2024-10-21 09:52:49.397182] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:14.321 09:52:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:14.321 00:08:14.321 real 0m4.590s 00:08:14.321 user 0m6.260s 00:08:14.321 sys 0m0.799s 00:08:14.321 09:52:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:14.321 09:52:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.321 ************************************ 00:08:14.321 END TEST raid_superblock_test 00:08:14.321 ************************************ 00:08:14.321 09:52:50 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:08:14.321 09:52:50 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:14.321 09:52:50 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:14.321 09:52:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:14.321 ************************************ 00:08:14.321 START TEST raid_read_error_test 00:08:14.321 ************************************ 00:08:14.321 09:52:50 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 2 read 00:08:14.321 09:52:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:08:14.321 09:52:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:14.321 09:52:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:14.321 09:52:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:14.321 09:52:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:14.321 09:52:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:14.322 09:52:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:14.322 09:52:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:14.322 09:52:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:14.322 09:52:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:14.322 09:52:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:14.322 09:52:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:14.322 09:52:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:14.322 09:52:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:14.322 09:52:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:14.322 09:52:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:14.322 09:52:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:14.322 09:52:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:14.322 09:52:50 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:08:14.322 09:52:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:14.322 09:52:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:14.322 09:52:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:14.322 09:52:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.UNuhbUvHqW 00:08:14.322 09:52:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61988 00:08:14.322 09:52:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:14.322 09:52:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61988 00:08:14.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:14.322 09:52:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 61988 ']' 00:08:14.322 09:52:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:14.322 09:52:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:14.322 09:52:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:14.322 09:52:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:14.322 09:52:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.322 [2024-10-21 09:52:50.799519] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
00:08:14.322 [2024-10-21 09:52:50.799650] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61988 ] 00:08:14.580 [2024-10-21 09:52:50.964833] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.580 [2024-10-21 09:52:51.107818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.839 [2024-10-21 09:52:51.353371] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:14.839 [2024-10-21 09:52:51.353427] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:15.097 09:52:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:15.097 09:52:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:15.097 09:52:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:15.097 09:52:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:15.097 09:52:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.097 09:52:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.097 BaseBdev1_malloc 00:08:15.097 09:52:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.097 09:52:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:15.097 09:52:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.097 09:52:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.097 true 00:08:15.097 09:52:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:08:15.097 09:52:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:15.097 09:52:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.097 09:52:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.097 [2024-10-21 09:52:51.687747] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:15.097 [2024-10-21 09:52:51.687816] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:15.097 [2024-10-21 09:52:51.687834] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:15.097 [2024-10-21 09:52:51.687849] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:15.097 [2024-10-21 09:52:51.690182] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:15.097 [2024-10-21 09:52:51.690296] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:15.357 BaseBdev1 00:08:15.357 09:52:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.357 09:52:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:15.357 09:52:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:15.357 09:52:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.357 09:52:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.357 BaseBdev2_malloc 00:08:15.357 09:52:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.357 09:52:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:15.357 09:52:51 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.357 09:52:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.357 true 00:08:15.357 09:52:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.357 09:52:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:15.357 09:52:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.357 09:52:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.357 [2024-10-21 09:52:51.763448] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:15.357 [2024-10-21 09:52:51.763516] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:15.357 [2024-10-21 09:52:51.763535] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:08:15.357 [2024-10-21 09:52:51.763547] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:15.357 [2024-10-21 09:52:51.765899] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:15.357 [2024-10-21 09:52:51.765936] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:15.357 BaseBdev2 00:08:15.357 09:52:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.357 09:52:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:15.357 09:52:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.357 09:52:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.357 [2024-10-21 09:52:51.775491] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:08:15.357 [2024-10-21 09:52:51.777640] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:15.357 [2024-10-21 09:52:51.777829] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:08:15.357 [2024-10-21 09:52:51.777861] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:15.357 [2024-10-21 09:52:51.778089] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:08:15.357 [2024-10-21 09:52:51.778275] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:08:15.357 [2024-10-21 09:52:51.778285] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:08:15.357 [2024-10-21 09:52:51.778443] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:15.357 09:52:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.357 09:52:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:15.357 09:52:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:15.357 09:52:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:15.357 09:52:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:15.357 09:52:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:15.357 09:52:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:15.357 09:52:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:15.357 09:52:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:15.357 09:52:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:15.357 09:52:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:15.357 09:52:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.357 09:52:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:15.357 09:52:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.357 09:52:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.357 09:52:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.357 09:52:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:15.357 "name": "raid_bdev1", 00:08:15.357 "uuid": "5296cadc-c774-442f-95f5-20538ed047ba", 00:08:15.357 "strip_size_kb": 64, 00:08:15.357 "state": "online", 00:08:15.357 "raid_level": "concat", 00:08:15.357 "superblock": true, 00:08:15.357 "num_base_bdevs": 2, 00:08:15.357 "num_base_bdevs_discovered": 2, 00:08:15.357 "num_base_bdevs_operational": 2, 00:08:15.357 "base_bdevs_list": [ 00:08:15.357 { 00:08:15.357 "name": "BaseBdev1", 00:08:15.357 "uuid": "ff9f2293-49eb-5104-8d10-dc63f7d07f45", 00:08:15.357 "is_configured": true, 00:08:15.357 "data_offset": 2048, 00:08:15.357 "data_size": 63488 00:08:15.357 }, 00:08:15.357 { 00:08:15.357 "name": "BaseBdev2", 00:08:15.357 "uuid": "b3125723-f681-5fec-8e15-77772568da7b", 00:08:15.357 "is_configured": true, 00:08:15.357 "data_offset": 2048, 00:08:15.357 "data_size": 63488 00:08:15.357 } 00:08:15.357 ] 00:08:15.357 }' 00:08:15.357 09:52:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:15.357 09:52:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.924 09:52:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:15.924 09:52:52 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:15.924 [2024-10-21 09:52:52.307947] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:08:16.862 09:52:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:16.862 09:52:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.862 09:52:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.862 09:52:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.862 09:52:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:16.862 09:52:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:08:16.862 09:52:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:16.862 09:52:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:16.862 09:52:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:16.862 09:52:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:16.862 09:52:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:16.862 09:52:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:16.862 09:52:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:16.862 09:52:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:16.862 09:52:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:16.862 09:52:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:08:16.862 09:52:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:16.862 09:52:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.862 09:52:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:16.862 09:52:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.862 09:52:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.862 09:52:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.862 09:52:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:16.862 "name": "raid_bdev1", 00:08:16.862 "uuid": "5296cadc-c774-442f-95f5-20538ed047ba", 00:08:16.862 "strip_size_kb": 64, 00:08:16.862 "state": "online", 00:08:16.862 "raid_level": "concat", 00:08:16.862 "superblock": true, 00:08:16.862 "num_base_bdevs": 2, 00:08:16.862 "num_base_bdevs_discovered": 2, 00:08:16.862 "num_base_bdevs_operational": 2, 00:08:16.862 "base_bdevs_list": [ 00:08:16.862 { 00:08:16.862 "name": "BaseBdev1", 00:08:16.862 "uuid": "ff9f2293-49eb-5104-8d10-dc63f7d07f45", 00:08:16.862 "is_configured": true, 00:08:16.862 "data_offset": 2048, 00:08:16.862 "data_size": 63488 00:08:16.862 }, 00:08:16.862 { 00:08:16.862 "name": "BaseBdev2", 00:08:16.862 "uuid": "b3125723-f681-5fec-8e15-77772568da7b", 00:08:16.862 "is_configured": true, 00:08:16.862 "data_offset": 2048, 00:08:16.862 "data_size": 63488 00:08:16.862 } 00:08:16.862 ] 00:08:16.862 }' 00:08:16.862 09:52:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:16.862 09:52:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.121 09:52:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:17.121 09:52:53 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.121 09:52:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.121 [2024-10-21 09:52:53.700320] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:17.121 [2024-10-21 09:52:53.700477] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:17.121 [2024-10-21 09:52:53.703142] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:17.121 [2024-10-21 09:52:53.703232] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:17.121 [2024-10-21 09:52:53.703288] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:17.121 [2024-10-21 09:52:53.703333] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:08:17.121 { 00:08:17.121 "results": [ 00:08:17.121 { 00:08:17.121 "job": "raid_bdev1", 00:08:17.121 "core_mask": "0x1", 00:08:17.121 "workload": "randrw", 00:08:17.121 "percentage": 50, 00:08:17.121 "status": "finished", 00:08:17.121 "queue_depth": 1, 00:08:17.121 "io_size": 131072, 00:08:17.121 "runtime": 1.393213, 00:08:17.121 "iops": 14912.292664510021, 00:08:17.121 "mibps": 1864.0365830637527, 00:08:17.121 "io_failed": 1, 00:08:17.121 "io_timeout": 0, 00:08:17.121 "avg_latency_us": 94.11377991241156, 00:08:17.121 "min_latency_us": 24.370305676855896, 00:08:17.121 "max_latency_us": 1323.598253275109 00:08:17.121 } 00:08:17.121 ], 00:08:17.122 "core_count": 1 00:08:17.122 } 00:08:17.122 09:52:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.122 09:52:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61988 00:08:17.122 09:52:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 61988 ']' 00:08:17.122 09:52:53 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 61988 00:08:17.122 09:52:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:08:17.122 09:52:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:17.122 09:52:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61988 00:08:17.380 killing process with pid 61988 00:08:17.380 09:52:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:17.380 09:52:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:17.380 09:52:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61988' 00:08:17.380 09:52:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 61988 00:08:17.381 [2024-10-21 09:52:53.747934] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:17.381 09:52:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 61988 00:08:17.381 [2024-10-21 09:52:53.895445] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:18.785 09:52:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.UNuhbUvHqW 00:08:18.785 09:52:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:18.785 09:52:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:18.785 09:52:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:08:18.785 09:52:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:08:18.785 09:52:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:18.785 09:52:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:18.785 09:52:55 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:08:18.785 00:08:18.785 real 0m4.492s 00:08:18.785 user 0m5.286s 00:08:18.785 sys 0m0.617s 00:08:18.785 09:52:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:18.785 ************************************ 00:08:18.785 END TEST raid_read_error_test 00:08:18.785 ************************************ 00:08:18.785 09:52:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.785 09:52:55 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:08:18.785 09:52:55 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:18.785 09:52:55 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:18.785 09:52:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:18.785 ************************************ 00:08:18.785 START TEST raid_write_error_test 00:08:18.785 ************************************ 00:08:18.785 09:52:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 2 write 00:08:18.785 09:52:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:08:18.785 09:52:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:18.785 09:52:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:18.785 09:52:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:18.785 09:52:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:18.785 09:52:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:18.785 09:52:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:18.785 09:52:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:18.785 09:52:55 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:18.785 09:52:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:18.785 09:52:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:18.785 09:52:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:18.785 09:52:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:18.785 09:52:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:18.785 09:52:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:18.785 09:52:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:18.785 09:52:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:18.785 09:52:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:18.785 09:52:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:08:18.785 09:52:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:18.785 09:52:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:18.785 09:52:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:18.785 09:52:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.TEfvkfwR1q 00:08:18.785 09:52:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62139 00:08:18.785 09:52:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:18.785 09:52:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62139 00:08:18.785 09:52:55 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 62139 ']' 00:08:18.785 09:52:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:18.785 09:52:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:18.785 09:52:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:18.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:18.785 09:52:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:18.785 09:52:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.785 [2024-10-21 09:52:55.355891] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:08:18.785 [2024-10-21 09:52:55.356092] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62139 ] 00:08:19.044 [2024-10-21 09:52:55.519123] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.302 [2024-10-21 09:52:55.660999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.562 [2024-10-21 09:52:55.907408] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:19.562 [2024-10-21 09:52:55.907581] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:19.821 09:52:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:19.821 09:52:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:19.821 09:52:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in 
"${base_bdevs[@]}" 00:08:19.821 09:52:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:19.821 09:52:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.821 09:52:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.821 BaseBdev1_malloc 00:08:19.821 09:52:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.821 09:52:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:19.821 09:52:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.821 09:52:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.821 true 00:08:19.821 09:52:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.821 09:52:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:19.821 09:52:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.821 09:52:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.821 [2024-10-21 09:52:56.259409] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:19.821 [2024-10-21 09:52:56.259547] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:19.821 [2024-10-21 09:52:56.259590] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:19.822 [2024-10-21 09:52:56.259606] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:19.822 [2024-10-21 09:52:56.261897] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:19.822 [2024-10-21 09:52:56.261946] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:19.822 BaseBdev1 00:08:19.822 09:52:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.822 09:52:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:19.822 09:52:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:19.822 09:52:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.822 09:52:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.822 BaseBdev2_malloc 00:08:19.822 09:52:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.822 09:52:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:19.822 09:52:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.822 09:52:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.822 true 00:08:19.822 09:52:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.822 09:52:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:19.822 09:52:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.822 09:52:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.822 [2024-10-21 09:52:56.333666] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:19.822 [2024-10-21 09:52:56.333724] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:19.822 [2024-10-21 09:52:56.333740] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:08:19.822 
[2024-10-21 09:52:56.333751] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:19.822 [2024-10-21 09:52:56.336049] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:19.822 [2024-10-21 09:52:56.336085] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:19.822 BaseBdev2 00:08:19.822 09:52:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.822 09:52:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:19.822 09:52:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.822 09:52:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.822 [2024-10-21 09:52:56.345725] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:19.822 [2024-10-21 09:52:56.347738] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:19.822 [2024-10-21 09:52:56.347923] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:08:19.822 [2024-10-21 09:52:56.347938] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:19.822 [2024-10-21 09:52:56.348156] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:08:19.822 [2024-10-21 09:52:56.348338] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:08:19.822 [2024-10-21 09:52:56.348348] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:08:19.822 [2024-10-21 09:52:56.348496] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:19.822 09:52:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.822 
09:52:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:19.822 09:52:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:19.822 09:52:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:19.822 09:52:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:19.822 09:52:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:19.822 09:52:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:19.822 09:52:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:19.822 09:52:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:19.822 09:52:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:19.822 09:52:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:19.822 09:52:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.822 09:52:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:19.822 09:52:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.822 09:52:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.822 09:52:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.822 09:52:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:19.822 "name": "raid_bdev1", 00:08:19.822 "uuid": "182e5511-43b5-4800-9307-2b06a08268e5", 00:08:19.822 "strip_size_kb": 64, 00:08:19.822 "state": "online", 00:08:19.822 "raid_level": "concat", 00:08:19.822 "superblock": true, 
00:08:19.822 "num_base_bdevs": 2, 00:08:19.822 "num_base_bdevs_discovered": 2, 00:08:19.822 "num_base_bdevs_operational": 2, 00:08:19.822 "base_bdevs_list": [ 00:08:19.822 { 00:08:19.822 "name": "BaseBdev1", 00:08:19.822 "uuid": "be10d715-43e7-5f18-b9b6-71a9b1868bd4", 00:08:19.822 "is_configured": true, 00:08:19.822 "data_offset": 2048, 00:08:19.822 "data_size": 63488 00:08:19.822 }, 00:08:19.822 { 00:08:19.822 "name": "BaseBdev2", 00:08:19.822 "uuid": "8c2bbab6-8270-5bb6-a6aa-8c4d552f489e", 00:08:19.822 "is_configured": true, 00:08:19.822 "data_offset": 2048, 00:08:19.822 "data_size": 63488 00:08:19.822 } 00:08:19.822 ] 00:08:19.822 }' 00:08:19.822 09:52:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:19.822 09:52:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.389 09:52:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:20.389 09:52:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:20.389 [2024-10-21 09:52:56.898106] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:08:21.325 09:52:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:21.325 09:52:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.325 09:52:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.325 09:52:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.325 09:52:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:21.325 09:52:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:08:21.325 09:52:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # 
expected_num_base_bdevs=2 00:08:21.325 09:52:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:21.325 09:52:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:21.325 09:52:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:21.325 09:52:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:21.325 09:52:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:21.325 09:52:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:21.325 09:52:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:21.325 09:52:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:21.325 09:52:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:21.325 09:52:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:21.325 09:52:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:21.325 09:52:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:21.325 09:52:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.325 09:52:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.325 09:52:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.325 09:52:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:21.325 "name": "raid_bdev1", 00:08:21.325 "uuid": "182e5511-43b5-4800-9307-2b06a08268e5", 00:08:21.325 "strip_size_kb": 64, 00:08:21.325 "state": "online", 00:08:21.325 "raid_level": "concat", 
00:08:21.325 "superblock": true, 00:08:21.325 "num_base_bdevs": 2, 00:08:21.325 "num_base_bdevs_discovered": 2, 00:08:21.325 "num_base_bdevs_operational": 2, 00:08:21.325 "base_bdevs_list": [ 00:08:21.325 { 00:08:21.325 "name": "BaseBdev1", 00:08:21.325 "uuid": "be10d715-43e7-5f18-b9b6-71a9b1868bd4", 00:08:21.325 "is_configured": true, 00:08:21.325 "data_offset": 2048, 00:08:21.325 "data_size": 63488 00:08:21.325 }, 00:08:21.325 { 00:08:21.325 "name": "BaseBdev2", 00:08:21.325 "uuid": "8c2bbab6-8270-5bb6-a6aa-8c4d552f489e", 00:08:21.325 "is_configured": true, 00:08:21.325 "data_offset": 2048, 00:08:21.325 "data_size": 63488 00:08:21.325 } 00:08:21.325 ] 00:08:21.325 }' 00:08:21.325 09:52:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:21.325 09:52:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.893 09:52:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:21.893 09:52:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.893 09:52:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.893 [2024-10-21 09:52:58.274140] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:21.893 [2024-10-21 09:52:58.274191] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:21.893 [2024-10-21 09:52:58.276721] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:21.893 [2024-10-21 09:52:58.276776] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:21.893 [2024-10-21 09:52:58.276810] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:21.893 [2024-10-21 09:52:58.276823] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:08:21.893 { 
00:08:21.893 "results": [
00:08:21.893 {
00:08:21.893 "job": "raid_bdev1",
00:08:21.893 "core_mask": "0x1",
00:08:21.893 "workload": "randrw",
00:08:21.893 "percentage": 50,
00:08:21.893 "status": "finished",
00:08:21.893 "queue_depth": 1,
00:08:21.893 "io_size": 131072,
00:08:21.893 "runtime": 1.376677,
00:08:21.893 "iops": 15198.19100631448,
00:08:21.893 "mibps": 1899.77387578931,
00:08:21.893 "io_failed": 1,
00:08:21.893 "io_timeout": 0,
00:08:21.893 "avg_latency_us": 92.32052852535982,
00:08:21.893 "min_latency_us": 24.482096069868994,
00:08:21.893 "max_latency_us": 1273.5161572052402
00:08:21.893 }
00:08:21.893 ],
00:08:21.893 "core_count": 1
00:08:21.893 }
00:08:21.893 09:52:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:21.893 09:52:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62139
00:08:21.893 09:52:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 62139 ']'
00:08:21.893 09:52:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 62139
00:08:21.893 09:52:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname
00:08:21.893 09:52:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:08:21.893 09:52:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62139
00:08:21.893 killing process with pid 62139 09:52:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:08:21.893 09:52:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:08:21.893 09:52:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62139'
00:08:21.893 09:52:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 62139
00:08:21.893 [2024-10-21 09:52:58.320859] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:08:21.893 09:52:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 62139
00:08:21.893 [2024-10-21 09:52:58.460557] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:08:23.273 09:52:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.TEfvkfwR1q
00:08:23.273 09:52:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1
00:08:23.273 09:52:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}'
00:08:23.273 09:52:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73
00:08:23.273 09:52:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat
00:08:23.273 09:52:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:08:23.273 09:52:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1
00:08:23.273 09:52:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]]
00:08:23.273 ************************************
00:08:23.273 END TEST raid_write_error_test
00:08:23.273 ************************************
00:08:23.273
00:08:23.273 real 0m4.450s
00:08:23.273 user 0m5.249s
00:08:23.273 sys 0m0.629s
00:08:23.273 09:52:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:23.273 09:52:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:23.273 09:52:59 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1
00:08:23.273 09:52:59 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false
00:08:23.273 09:52:59 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:08:23.273 09:52:59 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:23.273 09:52:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:08:23.273 ************************************
00:08:23.273 START TEST raid_state_function_test
00:08:23.273 ************************************
00:08:23.273 09:52:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 false
00:08:23.273 09:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1
00:08:23.273 09:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2
00:08:23.273 09:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false
00:08:23.273 09:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:08:23.273 09:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:08:23.273 09:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:08:23.273 09:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:08:23.274 09:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:08:23.274 09:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:08:23.274 09:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:08:23.274 09:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:08:23.274 09:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:08:23.274 09:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:08:23.274 09:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:08:23.274 09:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:08:23.274 09:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size
00:08:23.274 09:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:08:23.274 09:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:08:23.274 09:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']'
00:08:23.274 09:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0
00:08:23.274 09:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']'
00:08:23.274 09:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg=
00:08:23.274 09:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62277
00:08:23.274 09:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:08:23.274 Process raid pid: 62277
00:08:23.274 09:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62277'
00:08:23.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:23.274 09:52:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62277
00:08:23.274 09:52:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 62277 ']'
00:08:23.274 09:52:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:23.274 09:52:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:08:23.274 09:52:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:23.274 09:52:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:08:23.274 09:52:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:23.274 [2024-10-21 09:52:59.865349] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization...
00:08:23.274 [2024-10-21 09:52:59.865547] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:08:23.533 [2024-10-21 09:53:00.030303] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:23.792 [2024-10-21 09:53:00.160096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:24.050 [2024-10-21 09:53:00.417665] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:24.050 [2024-10-21 09:53:00.417832] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:24.308 09:53:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:08:24.308 09:53:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0
00:08:24.308 09:53:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:08:24.308 09:53:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:24.308 09:53:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:24.308 [2024-10-21 09:53:00.675609] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:08:24.308 [2024-10-21 09:53:00.675776] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:08:24.308 [2024-10-21 09:53:00.675805] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:08:24.308 [2024-10-21 09:53:00.675827] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:08:24.308 09:53:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:24.308 09:53:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:08:24.308 09:53:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:24.308 09:53:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:24.309 09:53:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:08:24.309 09:53:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:08:24.309 09:53:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:24.309 09:53:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:24.309 09:53:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:24.309 09:53:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:24.309 09:53:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:24.309 09:53:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:24.309 09:53:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:24.309 09:53:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:24.309 09:53:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:24.309 09:53:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:24.309 09:53:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:24.309 "name": "Existed_Raid",
00:08:24.309 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:24.309 "strip_size_kb": 0,
00:08:24.309 "state": "configuring",
00:08:24.309 "raid_level": "raid1",
00:08:24.309 "superblock": false,
00:08:24.309 "num_base_bdevs": 2,
00:08:24.309 "num_base_bdevs_discovered": 0,
00:08:24.309 "num_base_bdevs_operational": 2,
00:08:24.309 "base_bdevs_list": [
00:08:24.309 {
00:08:24.309 "name": "BaseBdev1",
00:08:24.309 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:24.309 "is_configured": false,
00:08:24.309 "data_offset": 0,
00:08:24.309 "data_size": 0
00:08:24.309 },
00:08:24.309 {
00:08:24.309 "name": "BaseBdev2",
00:08:24.309 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:24.309 "is_configured": false,
00:08:24.309 "data_offset": 0,
00:08:24.309 "data_size": 0
00:08:24.309 }
00:08:24.309 ]
00:08:24.309 }'
00:08:24.309 09:53:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:24.309 09:53:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:24.568 09:53:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:08:24.568 09:53:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:24.568 09:53:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:24.568 [2024-10-21 09:53:01.086785] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:08:24.568 [2024-10-21 09:53:01.086870] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000005b80 name Existed_Raid, state configuring
00:08:24.568 09:53:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:24.568 09:53:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:08:24.568 09:53:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:24.568 09:53:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:24.568 [2024-10-21 09:53:01.098790] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:08:24.568 [2024-10-21 09:53:01.098866] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:08:24.568 [2024-10-21 09:53:01.098890] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:08:24.568 [2024-10-21 09:53:01.098914] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:08:24.568 09:53:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:24.568 09:53:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:08:24.568 09:53:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:24.568 09:53:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:24.568 [2024-10-21 09:53:01.153855] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:08:24.568 BaseBdev1
00:08:24.568 09:53:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:24.568 09:53:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:08:24.568 09:53:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:08:24.568 09:53:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:08:24.568 09:53:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:08:24.568 09:53:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:08:24.568 09:53:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:08:24.568 09:53:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:08:24.568 09:53:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:24.568 09:53:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:24.827 09:53:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:24.827 09:53:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:08:24.827 09:53:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:24.827 09:53:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:24.827 [
00:08:24.827 {
00:08:24.827 "name": "BaseBdev1",
00:08:24.827 "aliases": [
00:08:24.827 "db233ac0-8aa5-4ee9-a4d8-d55f364de1ed"
00:08:24.827 ],
00:08:24.827 "product_name": "Malloc disk",
00:08:24.827 "block_size": 512,
00:08:24.827 "num_blocks": 65536,
00:08:24.827 "uuid": "db233ac0-8aa5-4ee9-a4d8-d55f364de1ed",
00:08:24.827 "assigned_rate_limits": {
00:08:24.827 "rw_ios_per_sec": 0,
00:08:24.827 "rw_mbytes_per_sec": 0,
00:08:24.827 "r_mbytes_per_sec": 0,
00:08:24.827 "w_mbytes_per_sec": 0
00:08:24.827 },
00:08:24.827 "claimed": true,
00:08:24.827 "claim_type": "exclusive_write",
00:08:24.827 "zoned": false,
00:08:24.827 "supported_io_types": {
00:08:24.827 "read": true,
00:08:24.827 "write": true,
00:08:24.827 "unmap": true,
00:08:24.827 "flush": true,
00:08:24.827 "reset": true,
00:08:24.827 "nvme_admin": false,
00:08:24.827 "nvme_io": false,
00:08:24.827 "nvme_io_md": false,
00:08:24.827 "write_zeroes": true,
00:08:24.827 "zcopy": true,
00:08:24.827 "get_zone_info": false,
00:08:24.827 "zone_management": false,
00:08:24.827 "zone_append": false,
00:08:24.827 "compare": false,
00:08:24.827 "compare_and_write": false,
00:08:24.827 "abort": true,
00:08:24.827 "seek_hole": false,
00:08:24.827 "seek_data": false,
00:08:24.827 "copy": true,
00:08:24.827 "nvme_iov_md": false
00:08:24.827 },
00:08:24.827 "memory_domains": [
00:08:24.827 {
00:08:24.827 "dma_device_id": "system",
00:08:24.827 "dma_device_type": 1
00:08:24.827 },
00:08:24.827 {
00:08:24.827 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:24.827 "dma_device_type": 2
00:08:24.827 }
00:08:24.827 ],
00:08:24.827 "driver_specific": {}
00:08:24.827 }
00:08:24.827 ]
00:08:24.827 09:53:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:24.827 09:53:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:08:24.828 09:53:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:08:24.828 09:53:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:24.828 09:53:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:24.828 09:53:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:08:24.828 09:53:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:08:24.828 09:53:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:24.828 09:53:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:24.828 09:53:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:24.828 09:53:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:24.828 09:53:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:24.828 09:53:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:24.828 09:53:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:24.828 09:53:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:24.828 09:53:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:24.828 09:53:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:24.828 09:53:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:24.828 "name": "Existed_Raid",
00:08:24.828 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:24.828 "strip_size_kb": 0,
00:08:24.828 "state": "configuring",
00:08:24.828 "raid_level": "raid1",
00:08:24.828 "superblock": false,
00:08:24.828 "num_base_bdevs": 2,
00:08:24.828 "num_base_bdevs_discovered": 1,
00:08:24.828 "num_base_bdevs_operational": 2,
00:08:24.828 "base_bdevs_list": [
00:08:24.828 {
00:08:24.828 "name": "BaseBdev1",
00:08:24.828 "uuid": "db233ac0-8aa5-4ee9-a4d8-d55f364de1ed",
00:08:24.828 "is_configured": true,
00:08:24.828 "data_offset": 0,
00:08:24.828 "data_size": 65536
00:08:24.828 },
00:08:24.828 {
00:08:24.828 "name": "BaseBdev2",
00:08:24.828 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:24.828 "is_configured": false,
00:08:24.828 "data_offset": 0,
00:08:24.828 "data_size": 0
00:08:24.828 }
00:08:24.828 ]
00:08:24.828 }'
00:08:24.828 09:53:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:24.828 09:53:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:25.087 09:53:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:08:25.087 09:53:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:25.087 09:53:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:25.087 [2024-10-21 09:53:01.640999] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:08:25.087 [2024-10-21 09:53:01.641110] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000005f00 name Existed_Raid, state configuring
00:08:25.087 09:53:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:25.087 09:53:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:08:25.087 09:53:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:25.087 09:53:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:25.087 [2024-10-21 09:53:01.649038] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:08:25.087 [2024-10-21 09:53:01.651110] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:08:25.087 [2024-10-21 09:53:01.651184] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:08:25.087 09:53:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:25.087 09:53:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:08:25.087 09:53:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:08:25.087 09:53:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:08:25.087 09:53:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:25.087 09:53:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:25.087 09:53:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:08:25.087 09:53:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:08:25.087 09:53:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:25.087 09:53:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:25.087 09:53:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:25.087 09:53:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:25.087 09:53:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:25.087 09:53:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:25.087 09:53:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:25.087 09:53:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:25.087 09:53:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:25.087 09:53:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:25.347 09:53:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:25.347 "name": "Existed_Raid",
00:08:25.347 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:25.347 "strip_size_kb": 0,
00:08:25.347 "state": "configuring",
00:08:25.347 "raid_level": "raid1",
00:08:25.347 "superblock": false,
00:08:25.347 "num_base_bdevs": 2,
00:08:25.347 "num_base_bdevs_discovered": 1,
00:08:25.347 "num_base_bdevs_operational": 2,
00:08:25.347 "base_bdevs_list": [
00:08:25.347 {
00:08:25.347 "name": "BaseBdev1",
00:08:25.347 "uuid": "db233ac0-8aa5-4ee9-a4d8-d55f364de1ed",
00:08:25.347 "is_configured": true,
00:08:25.347 "data_offset": 0,
00:08:25.347 "data_size": 65536
00:08:25.347 },
00:08:25.347 {
00:08:25.347 "name": "BaseBdev2",
00:08:25.347 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:25.347 "is_configured": false,
00:08:25.347 "data_offset": 0,
00:08:25.347 "data_size": 0
00:08:25.347 }
00:08:25.347 ]
00:08:25.347 }'
00:08:25.347 09:53:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:25.347 09:53:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:25.605 09:53:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:08:25.605 09:53:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:25.605 09:53:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:25.605 [2024-10-21 09:53:02.121565] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:08:25.605 [2024-10-21 09:53:02.121733] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280
00:08:25.605 [2024-10-21 09:53:02.121746] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512
00:08:25.605 [2024-10-21 09:53:02.122054] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0
00:08:25.606 [2024-10-21 09:53:02.122255] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280
00:08:25.606 [2024-10-21 09:53:02.122269] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006280
00:08:25.606 [2024-10-21 09:53:02.122585] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:25.606 BaseBdev2 09:53:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:25.606 09:53:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:08:25.606 09:53:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2
00:08:25.606 09:53:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:08:25.606 09:53:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:08:25.606 09:53:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:08:25.606 09:53:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:08:25.606 09:53:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:08:25.606 09:53:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:25.606 09:53:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:25.606 09:53:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:25.606 09:53:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:08:25.606 09:53:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:25.606 09:53:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:25.606 [
00:08:25.606 {
00:08:25.606 "name": "BaseBdev2",
00:08:25.606 "aliases": [
00:08:25.606 "5cdb407c-d1af-434e-8364-e68bedc37342"
00:08:25.606 ],
00:08:25.606 "product_name": "Malloc disk",
00:08:25.606 "block_size": 512,
00:08:25.606 "num_blocks": 65536,
00:08:25.606 "uuid": "5cdb407c-d1af-434e-8364-e68bedc37342",
00:08:25.606 "assigned_rate_limits": {
00:08:25.606 "rw_ios_per_sec": 0,
00:08:25.606 "rw_mbytes_per_sec": 0,
00:08:25.606 "r_mbytes_per_sec": 0,
00:08:25.606 "w_mbytes_per_sec": 0
00:08:25.606 },
00:08:25.606 "claimed": true,
00:08:25.606 "claim_type": "exclusive_write",
00:08:25.606 "zoned": false,
00:08:25.606 "supported_io_types": {
00:08:25.606 "read": true,
00:08:25.606 "write": true,
00:08:25.606 "unmap": true,
00:08:25.606 "flush": true,
00:08:25.606 "reset": true,
00:08:25.606 "nvme_admin": false,
00:08:25.606 "nvme_io": false,
00:08:25.606 "nvme_io_md": false,
00:08:25.606 "write_zeroes": true,
00:08:25.606 "zcopy": true,
00:08:25.606 "get_zone_info": false,
00:08:25.606 "zone_management": false,
00:08:25.606 "zone_append": false,
00:08:25.606 "compare": false,
00:08:25.606 "compare_and_write": false,
00:08:25.606 "abort": true,
00:08:25.606 "seek_hole": false,
00:08:25.606 "seek_data": false,
00:08:25.606 "copy": true,
00:08:25.606 "nvme_iov_md": false
00:08:25.606 },
00:08:25.606 "memory_domains": [
00:08:25.606 {
00:08:25.606 "dma_device_id": "system",
00:08:25.606 "dma_device_type": 1
00:08:25.606 },
00:08:25.606 {
00:08:25.606 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:25.606 "dma_device_type": 2
00:08:25.606 }
00:08:25.606 ],
00:08:25.606 "driver_specific": {}
00:08:25.606 }
00:08:25.606 ]
00:08:25.606 09:53:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:25.606 09:53:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:08:25.606 09:53:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:08:25.606 09:53:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:08:25.606 09:53:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2
00:08:25.606 09:53:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:25.606 09:53:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:08:25.606 09:53:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:08:25.606 09:53:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:08:25.606 09:53:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:25.606 09:53:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:25.606 09:53:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:25.606 09:53:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:25.606 09:53:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:25.606 09:53:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:25.606 09:53:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:25.606 09:53:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:25.606 09:53:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:25.606 09:53:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:25.875 09:53:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:25.875 "name": "Existed_Raid",
00:08:25.875 "uuid": "1d049777-982e-45ef-8f55-e811b5ede18b",
00:08:25.875 "strip_size_kb": 0,
00:08:25.875 "state": "online",
00:08:25.875 "raid_level": "raid1",
00:08:25.875 "superblock": false,
00:08:25.875 "num_base_bdevs": 2,
00:08:25.875 "num_base_bdevs_discovered": 2,
00:08:25.875 "num_base_bdevs_operational": 2,
00:08:25.875 "base_bdevs_list": [
00:08:25.875 {
00:08:25.875 "name": "BaseBdev1",
00:08:25.875 "uuid": "db233ac0-8aa5-4ee9-a4d8-d55f364de1ed",
00:08:25.875 "is_configured": true,
00:08:25.875 "data_offset": 0,
00:08:25.875 "data_size": 65536
00:08:25.875 },
00:08:25.875 {
00:08:25.875 "name": "BaseBdev2",
00:08:25.875 "uuid": "5cdb407c-d1af-434e-8364-e68bedc37342",
00:08:25.875 "is_configured": true,
00:08:25.875 "data_offset": 0,
00:08:25.875 "data_size": 65536
00:08:25.875 }
00:08:25.875 ]
00:08:25.875 }'
00:08:25.875 09:53:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:25.875 09:53:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:26.149 09:53:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:08:26.149 09:53:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:08:26.149 09:53:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:08:26.149 09:53:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:08:26.149 09:53:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name
00:08:26.149 09:53:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:08:26.149 09:53:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:08:26.149 09:53:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:08:26.149 09:53:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:26.149 09:53:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:26.149 [2024-10-21 09:53:02.605058] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:08:26.149 09:53:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:26.149 09:53:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:08:26.149 "name": "Existed_Raid",
00:08:26.149 "aliases": [
00:08:26.149 "1d049777-982e-45ef-8f55-e811b5ede18b"
00:08:26.149 ],
00:08:26.149 "product_name": "Raid Volume",
00:08:26.149 "block_size": 512,
00:08:26.149 "num_blocks": 65536,
00:08:26.149 "uuid": "1d049777-982e-45ef-8f55-e811b5ede18b",
00:08:26.149 "assigned_rate_limits": {
00:08:26.149 "rw_ios_per_sec": 0,
00:08:26.149 "rw_mbytes_per_sec": 0,
00:08:26.149 "r_mbytes_per_sec": 0,
00:08:26.149 "w_mbytes_per_sec": 0
00:08:26.149 },
00:08:26.149 "claimed": false,
00:08:26.149 "zoned": false,
00:08:26.149 "supported_io_types": {
00:08:26.149 "read": true,
00:08:26.149 "write": true,
00:08:26.149 "unmap": false,
00:08:26.149 "flush": false,
00:08:26.149 "reset": true,
00:08:26.149 "nvme_admin": false,
00:08:26.149 "nvme_io": false,
00:08:26.149 "nvme_io_md": false,
00:08:26.149 "write_zeroes": true,
00:08:26.149 "zcopy": false,
00:08:26.149 "get_zone_info": false,
00:08:26.149 "zone_management": false,
00:08:26.149 "zone_append": false,
00:08:26.149 "compare": false,
00:08:26.149 "compare_and_write": false,
00:08:26.149 "abort": false,
00:08:26.149 "seek_hole": false,
00:08:26.149 "seek_data": false,
00:08:26.149 "copy": false,
00:08:26.149 "nvme_iov_md": false
00:08:26.149 },
00:08:26.149 "memory_domains": [
00:08:26.149 {
00:08:26.149 "dma_device_id": "system",
00:08:26.149 "dma_device_type": 1
00:08:26.149 },
00:08:26.149 {
00:08:26.149 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:26.149 "dma_device_type": 2
00:08:26.149 },
00:08:26.149 {
00:08:26.149 "dma_device_id": "system",
00:08:26.149 "dma_device_type": 1
00:08:26.149 },
00:08:26.149 {
00:08:26.149 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:26.149 "dma_device_type": 2
00:08:26.149 }
00:08:26.149 ],
00:08:26.149 "driver_specific": {
00:08:26.149 "raid": {
00:08:26.150 "uuid": "1d049777-982e-45ef-8f55-e811b5ede18b",
00:08:26.150 "strip_size_kb": 0,
00:08:26.150 "state": "online",
00:08:26.150 "raid_level": "raid1",
00:08:26.150 "superblock": false,
00:08:26.150 "num_base_bdevs": 2,
00:08:26.150 "num_base_bdevs_discovered": 2,
00:08:26.150 "num_base_bdevs_operational": 2,
00:08:26.150 "base_bdevs_list": [
00:08:26.150 {
00:08:26.150 "name": "BaseBdev1",
00:08:26.150 "uuid": "db233ac0-8aa5-4ee9-a4d8-d55f364de1ed",
00:08:26.150 "is_configured": true,
00:08:26.150 "data_offset": 0,
00:08:26.150 "data_size": 65536
00:08:26.150 },
00:08:26.150 {
00:08:26.150 "name": "BaseBdev2",
00:08:26.150 "uuid": "5cdb407c-d1af-434e-8364-e68bedc37342",
00:08:26.150 "is_configured": true,
00:08:26.150 "data_offset": 0,
00:08:26.150 "data_size": 65536
00:08:26.150 }
00:08:26.150 ]
00:08:26.150 }
00:08:26.150 }
00:08:26.150 }'
00:08:26.150 09:53:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:08:26.150 09:53:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:08:26.150 BaseBdev2'
00:08:26.150 09:53:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:26.150 09:53:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:08:26.150 09:53:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:26.150 09:53:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:26.150 09:53:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:08:26.150 09:53:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:26.150 09:53:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:26.409 09:53:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:26.409 09:53:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:26.409 09:53:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:26.409 09:53:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:26.409 09:53:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:26.409 09:53:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:08:26.409 09:53:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:26.409 09:53:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:26.409 09:53:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:26.409 09:53:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:26.409 09:53:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:26.409 09:53:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:08:26.409 09:53:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:26.409 09:53:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:26.409 [2024-10-21 09:53:02.804628] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:08:26.409 09:53:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:26.409 09:53:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state
00:08:26.409 09:53:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1
00:08:26.409 09:53:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:08:26.409 09:53:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0
00:08:26.409 09:53:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:26.409 09:53:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:08:26.409 09:53:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:26.409 09:53:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:26.409 09:53:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:26.409 09:53:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:26.409 09:53:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:26.409 09:53:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:26.409 09:53:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:26.409 09:53:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:26.409 09:53:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:26.409 09:53:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.409 09:53:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:26.409 09:53:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.409 09:53:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.409 09:53:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.409 09:53:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:26.409 "name": "Existed_Raid", 00:08:26.409 "uuid": 
"1d049777-982e-45ef-8f55-e811b5ede18b", 00:08:26.409 "strip_size_kb": 0, 00:08:26.409 "state": "online", 00:08:26.409 "raid_level": "raid1", 00:08:26.409 "superblock": false, 00:08:26.409 "num_base_bdevs": 2, 00:08:26.409 "num_base_bdevs_discovered": 1, 00:08:26.409 "num_base_bdevs_operational": 1, 00:08:26.409 "base_bdevs_list": [ 00:08:26.409 { 00:08:26.409 "name": null, 00:08:26.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:26.409 "is_configured": false, 00:08:26.409 "data_offset": 0, 00:08:26.409 "data_size": 65536 00:08:26.409 }, 00:08:26.409 { 00:08:26.409 "name": "BaseBdev2", 00:08:26.409 "uuid": "5cdb407c-d1af-434e-8364-e68bedc37342", 00:08:26.409 "is_configured": true, 00:08:26.409 "data_offset": 0, 00:08:26.409 "data_size": 65536 00:08:26.409 } 00:08:26.409 ] 00:08:26.409 }' 00:08:26.409 09:53:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:26.409 09:53:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.979 09:53:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:26.979 09:53:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:26.979 09:53:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.979 09:53:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:26.979 09:53:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.979 09:53:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.979 09:53:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.979 09:53:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:26.979 09:53:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid 
']' 00:08:26.979 09:53:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:26.979 09:53:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.979 09:53:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.979 [2024-10-21 09:53:03.328532] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:26.979 [2024-10-21 09:53:03.328667] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:26.979 [2024-10-21 09:53:03.433695] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:26.979 [2024-10-21 09:53:03.433886] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:26.979 [2024-10-21 09:53:03.433917] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state offline 00:08:26.979 09:53:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.979 09:53:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:26.979 09:53:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:26.979 09:53:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.979 09:53:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:26.979 09:53:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.979 09:53:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.979 09:53:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.979 09:53:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:26.979 
09:53:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:26.979 09:53:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:26.979 09:53:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62277 00:08:26.979 09:53:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 62277 ']' 00:08:26.979 09:53:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 62277 00:08:26.979 09:53:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:08:26.979 09:53:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:26.979 09:53:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62277 00:08:26.979 killing process with pid 62277 00:08:26.979 09:53:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:26.979 09:53:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:26.979 09:53:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62277' 00:08:26.979 09:53:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 62277 00:08:26.979 [2024-10-21 09:53:03.531640] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:26.979 09:53:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 62277 00:08:26.979 [2024-10-21 09:53:03.549979] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:28.357 09:53:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:28.357 00:08:28.357 real 0m4.977s 00:08:28.357 user 0m6.939s 00:08:28.357 sys 0m0.948s 00:08:28.357 ************************************ 00:08:28.357 END TEST raid_state_function_test 00:08:28.357 
************************************ 00:08:28.357 09:53:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:28.357 09:53:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.357 09:53:04 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:08:28.357 09:53:04 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:28.357 09:53:04 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:28.357 09:53:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:28.357 ************************************ 00:08:28.357 START TEST raid_state_function_test_sb 00:08:28.357 ************************************ 00:08:28.357 09:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:08:28.357 09:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:28.357 09:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:28.357 09:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:28.357 09:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:28.357 09:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:28.357 09:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:28.357 09:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:28.357 09:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:28.358 09:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:28.358 09:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # 
echo BaseBdev2 00:08:28.358 09:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:28.358 09:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:28.358 09:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:28.358 09:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:28.358 09:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:28.358 09:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:28.358 09:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:28.358 09:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:28.358 09:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:28.358 09:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:08:28.358 09:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:28.358 09:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:28.358 09:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62530 00:08:28.358 09:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:28.358 09:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62530' 00:08:28.358 Process raid pid: 62530 00:08:28.358 09:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62530 00:08:28.358 09:53:04 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@831 -- # '[' -z 62530 ']' 00:08:28.358 09:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:28.358 09:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:28.358 09:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:28.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:28.358 09:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:28.358 09:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.358 [2024-10-21 09:53:04.910419] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:08:28.358 [2024-10-21 09:53:04.910648] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:28.617 [2024-10-21 09:53:05.074645] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.876 [2024-10-21 09:53:05.218334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.134 [2024-10-21 09:53:05.478965] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:29.134 [2024-10-21 09:53:05.479108] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:29.393 09:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:29.393 09:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:08:29.393 09:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create 
-s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:29.393 09:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.393 09:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.393 [2024-10-21 09:53:05.757311] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:29.393 [2024-10-21 09:53:05.757449] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:29.393 [2024-10-21 09:53:05.757462] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:29.393 [2024-10-21 09:53:05.757472] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:29.393 09:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.393 09:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:29.393 09:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:29.393 09:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:29.393 09:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:29.393 09:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:29.393 09:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:29.393 09:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:29.393 09:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:29.393 09:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:08:29.393 09:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:29.393 09:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.393 09:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:29.393 09:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.393 09:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.393 09:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.393 09:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:29.393 "name": "Existed_Raid", 00:08:29.393 "uuid": "4e0d3fdd-05d4-4f2e-8da5-718c34a74272", 00:08:29.393 "strip_size_kb": 0, 00:08:29.393 "state": "configuring", 00:08:29.393 "raid_level": "raid1", 00:08:29.393 "superblock": true, 00:08:29.393 "num_base_bdevs": 2, 00:08:29.393 "num_base_bdevs_discovered": 0, 00:08:29.393 "num_base_bdevs_operational": 2, 00:08:29.393 "base_bdevs_list": [ 00:08:29.393 { 00:08:29.393 "name": "BaseBdev1", 00:08:29.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:29.393 "is_configured": false, 00:08:29.393 "data_offset": 0, 00:08:29.393 "data_size": 0 00:08:29.393 }, 00:08:29.393 { 00:08:29.393 "name": "BaseBdev2", 00:08:29.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:29.393 "is_configured": false, 00:08:29.393 "data_offset": 0, 00:08:29.393 "data_size": 0 00:08:29.393 } 00:08:29.393 ] 00:08:29.393 }' 00:08:29.393 09:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:29.393 09:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.652 09:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:08:29.652 09:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.652 09:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.652 [2024-10-21 09:53:06.188484] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:29.652 [2024-10-21 09:53:06.188530] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000005b80 name Existed_Raid, state configuring 00:08:29.652 09:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.652 09:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:29.652 09:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.652 09:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.652 [2024-10-21 09:53:06.200520] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:29.652 [2024-10-21 09:53:06.200563] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:29.652 [2024-10-21 09:53:06.200580] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:29.652 [2024-10-21 09:53:06.200608] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:29.652 09:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.652 09:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:29.652 09:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.652 09:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:08:29.912 [2024-10-21 09:53:06.256571] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:29.912 BaseBdev1 00:08:29.912 09:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.912 09:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:29.912 09:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:29.912 09:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:29.912 09:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:29.912 09:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:29.912 09:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:29.912 09:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:29.912 09:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.912 09:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.912 09:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.912 09:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:29.912 09:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.912 09:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.912 [ 00:08:29.912 { 00:08:29.912 "name": "BaseBdev1", 00:08:29.912 "aliases": [ 00:08:29.912 "4673e91f-f5d6-48ce-bf4d-4eee0ac42da8" 00:08:29.912 ], 00:08:29.912 "product_name": "Malloc disk", 00:08:29.912 "block_size": 512, 
00:08:29.912 "num_blocks": 65536, 00:08:29.912 "uuid": "4673e91f-f5d6-48ce-bf4d-4eee0ac42da8", 00:08:29.912 "assigned_rate_limits": { 00:08:29.912 "rw_ios_per_sec": 0, 00:08:29.912 "rw_mbytes_per_sec": 0, 00:08:29.912 "r_mbytes_per_sec": 0, 00:08:29.912 "w_mbytes_per_sec": 0 00:08:29.912 }, 00:08:29.912 "claimed": true, 00:08:29.912 "claim_type": "exclusive_write", 00:08:29.912 "zoned": false, 00:08:29.912 "supported_io_types": { 00:08:29.912 "read": true, 00:08:29.912 "write": true, 00:08:29.912 "unmap": true, 00:08:29.912 "flush": true, 00:08:29.912 "reset": true, 00:08:29.912 "nvme_admin": false, 00:08:29.912 "nvme_io": false, 00:08:29.912 "nvme_io_md": false, 00:08:29.912 "write_zeroes": true, 00:08:29.912 "zcopy": true, 00:08:29.912 "get_zone_info": false, 00:08:29.912 "zone_management": false, 00:08:29.912 "zone_append": false, 00:08:29.912 "compare": false, 00:08:29.912 "compare_and_write": false, 00:08:29.912 "abort": true, 00:08:29.912 "seek_hole": false, 00:08:29.912 "seek_data": false, 00:08:29.912 "copy": true, 00:08:29.912 "nvme_iov_md": false 00:08:29.912 }, 00:08:29.912 "memory_domains": [ 00:08:29.912 { 00:08:29.912 "dma_device_id": "system", 00:08:29.912 "dma_device_type": 1 00:08:29.912 }, 00:08:29.912 { 00:08:29.912 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:29.912 "dma_device_type": 2 00:08:29.912 } 00:08:29.912 ], 00:08:29.912 "driver_specific": {} 00:08:29.912 } 00:08:29.912 ] 00:08:29.912 09:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.912 09:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:29.912 09:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:29.912 09:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:29.912 09:53:06 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:29.912 09:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:29.912 09:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:29.912 09:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:29.912 09:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:29.912 09:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:29.912 09:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:29.912 09:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:29.912 09:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.912 09:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:29.912 09:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.912 09:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.912 09:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.912 09:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:29.912 "name": "Existed_Raid", 00:08:29.912 "uuid": "0decd452-8f9d-45a4-a7f4-ab6c51a14c76", 00:08:29.912 "strip_size_kb": 0, 00:08:29.912 "state": "configuring", 00:08:29.912 "raid_level": "raid1", 00:08:29.912 "superblock": true, 00:08:29.912 "num_base_bdevs": 2, 00:08:29.912 "num_base_bdevs_discovered": 1, 00:08:29.912 "num_base_bdevs_operational": 2, 00:08:29.912 "base_bdevs_list": [ 00:08:29.912 { 00:08:29.912 "name": "BaseBdev1", 
00:08:29.912 "uuid": "4673e91f-f5d6-48ce-bf4d-4eee0ac42da8", 00:08:29.912 "is_configured": true, 00:08:29.912 "data_offset": 2048, 00:08:29.912 "data_size": 63488 00:08:29.912 }, 00:08:29.912 { 00:08:29.912 "name": "BaseBdev2", 00:08:29.912 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:29.912 "is_configured": false, 00:08:29.912 "data_offset": 0, 00:08:29.912 "data_size": 0 00:08:29.912 } 00:08:29.912 ] 00:08:29.912 }' 00:08:29.912 09:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:29.912 09:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.172 09:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:30.172 09:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.172 09:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.172 [2024-10-21 09:53:06.727813] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:30.172 [2024-10-21 09:53:06.727998] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000005f00 name Existed_Raid, state configuring 00:08:30.172 09:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.172 09:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:30.172 09:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.172 09:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.172 [2024-10-21 09:53:06.735804] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:30.172 [2024-10-21 09:53:06.737939] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev2 00:08:30.172 [2024-10-21 09:53:06.737987] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:30.172 09:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.172 09:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:30.172 09:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:30.172 09:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:30.172 09:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:30.172 09:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:30.172 09:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:30.172 09:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:30.172 09:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:30.172 09:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:30.172 09:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:30.172 09:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:30.172 09:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:30.172 09:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.172 09:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.172 09:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:08:30.172 09:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.172 09:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.431 09:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:30.431 "name": "Existed_Raid", 00:08:30.431 "uuid": "1b26e1e1-0f02-4303-9c44-cffb72749ad7", 00:08:30.431 "strip_size_kb": 0, 00:08:30.431 "state": "configuring", 00:08:30.431 "raid_level": "raid1", 00:08:30.431 "superblock": true, 00:08:30.431 "num_base_bdevs": 2, 00:08:30.431 "num_base_bdevs_discovered": 1, 00:08:30.431 "num_base_bdevs_operational": 2, 00:08:30.431 "base_bdevs_list": [ 00:08:30.431 { 00:08:30.431 "name": "BaseBdev1", 00:08:30.431 "uuid": "4673e91f-f5d6-48ce-bf4d-4eee0ac42da8", 00:08:30.431 "is_configured": true, 00:08:30.431 "data_offset": 2048, 00:08:30.431 "data_size": 63488 00:08:30.431 }, 00:08:30.431 { 00:08:30.431 "name": "BaseBdev2", 00:08:30.431 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:30.431 "is_configured": false, 00:08:30.431 "data_offset": 0, 00:08:30.431 "data_size": 0 00:08:30.431 } 00:08:30.431 ] 00:08:30.431 }' 00:08:30.431 09:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:30.431 09:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.690 09:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:30.690 09:53:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.690 09:53:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.690 [2024-10-21 09:53:07.217468] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:30.690 [2024-10-21 09:53:07.217891] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:08:30.690 [2024-10-21 09:53:07.217946] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:30.690 [2024-10-21 09:53:07.218255] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:08:30.690 [2024-10-21 09:53:07.218473] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:08:30.690 BaseBdev2 00:08:30.691 [2024-10-21 09:53:07.218529] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006280 00:08:30.691 [2024-10-21 09:53:07.218798] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:30.691 09:53:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.691 09:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:30.691 09:53:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:30.691 09:53:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:30.691 09:53:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:30.691 09:53:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:30.691 09:53:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:30.691 09:53:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:30.691 09:53:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.691 09:53:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.691 09:53:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:08:30.691 09:53:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:30.691 09:53:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.691 09:53:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.691 [ 00:08:30.691 { 00:08:30.691 "name": "BaseBdev2", 00:08:30.691 "aliases": [ 00:08:30.691 "aba416a7-85cc-4df8-8fec-4e5bd41dad27" 00:08:30.691 ], 00:08:30.691 "product_name": "Malloc disk", 00:08:30.691 "block_size": 512, 00:08:30.691 "num_blocks": 65536, 00:08:30.691 "uuid": "aba416a7-85cc-4df8-8fec-4e5bd41dad27", 00:08:30.691 "assigned_rate_limits": { 00:08:30.691 "rw_ios_per_sec": 0, 00:08:30.691 "rw_mbytes_per_sec": 0, 00:08:30.691 "r_mbytes_per_sec": 0, 00:08:30.691 "w_mbytes_per_sec": 0 00:08:30.691 }, 00:08:30.691 "claimed": true, 00:08:30.691 "claim_type": "exclusive_write", 00:08:30.691 "zoned": false, 00:08:30.691 "supported_io_types": { 00:08:30.691 "read": true, 00:08:30.691 "write": true, 00:08:30.691 "unmap": true, 00:08:30.691 "flush": true, 00:08:30.691 "reset": true, 00:08:30.691 "nvme_admin": false, 00:08:30.691 "nvme_io": false, 00:08:30.691 "nvme_io_md": false, 00:08:30.691 "write_zeroes": true, 00:08:30.691 "zcopy": true, 00:08:30.691 "get_zone_info": false, 00:08:30.691 "zone_management": false, 00:08:30.691 "zone_append": false, 00:08:30.691 "compare": false, 00:08:30.691 "compare_and_write": false, 00:08:30.691 "abort": true, 00:08:30.691 "seek_hole": false, 00:08:30.691 "seek_data": false, 00:08:30.691 "copy": true, 00:08:30.691 "nvme_iov_md": false 00:08:30.691 }, 00:08:30.691 "memory_domains": [ 00:08:30.691 { 00:08:30.691 "dma_device_id": "system", 00:08:30.691 "dma_device_type": 1 00:08:30.691 }, 00:08:30.691 { 00:08:30.691 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:30.691 "dma_device_type": 2 00:08:30.691 } 00:08:30.691 ], 00:08:30.691 "driver_specific": 
{} 00:08:30.691 } 00:08:30.691 ] 00:08:30.691 09:53:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.691 09:53:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:30.691 09:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:30.691 09:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:30.691 09:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:30.691 09:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:30.691 09:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:30.691 09:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:30.691 09:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:30.691 09:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:30.691 09:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:30.691 09:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:30.691 09:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:30.691 09:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:30.691 09:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.691 09:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:30.691 09:53:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:30.691 09:53:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.691 09:53:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.951 09:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:30.951 "name": "Existed_Raid", 00:08:30.951 "uuid": "1b26e1e1-0f02-4303-9c44-cffb72749ad7", 00:08:30.951 "strip_size_kb": 0, 00:08:30.951 "state": "online", 00:08:30.951 "raid_level": "raid1", 00:08:30.951 "superblock": true, 00:08:30.951 "num_base_bdevs": 2, 00:08:30.951 "num_base_bdevs_discovered": 2, 00:08:30.951 "num_base_bdevs_operational": 2, 00:08:30.951 "base_bdevs_list": [ 00:08:30.951 { 00:08:30.951 "name": "BaseBdev1", 00:08:30.951 "uuid": "4673e91f-f5d6-48ce-bf4d-4eee0ac42da8", 00:08:30.951 "is_configured": true, 00:08:30.951 "data_offset": 2048, 00:08:30.951 "data_size": 63488 00:08:30.951 }, 00:08:30.951 { 00:08:30.951 "name": "BaseBdev2", 00:08:30.951 "uuid": "aba416a7-85cc-4df8-8fec-4e5bd41dad27", 00:08:30.951 "is_configured": true, 00:08:30.951 "data_offset": 2048, 00:08:30.951 "data_size": 63488 00:08:30.951 } 00:08:30.951 ] 00:08:30.951 }' 00:08:30.951 09:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:30.951 09:53:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.211 09:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:31.211 09:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:31.211 09:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:31.211 09:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:31.211 09:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local 
name 00:08:31.211 09:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:31.211 09:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:31.211 09:53:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.211 09:53:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.211 09:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:31.211 [2024-10-21 09:53:07.713016] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:31.211 09:53:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.211 09:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:31.211 "name": "Existed_Raid", 00:08:31.211 "aliases": [ 00:08:31.211 "1b26e1e1-0f02-4303-9c44-cffb72749ad7" 00:08:31.211 ], 00:08:31.211 "product_name": "Raid Volume", 00:08:31.211 "block_size": 512, 00:08:31.211 "num_blocks": 63488, 00:08:31.211 "uuid": "1b26e1e1-0f02-4303-9c44-cffb72749ad7", 00:08:31.211 "assigned_rate_limits": { 00:08:31.211 "rw_ios_per_sec": 0, 00:08:31.211 "rw_mbytes_per_sec": 0, 00:08:31.211 "r_mbytes_per_sec": 0, 00:08:31.211 "w_mbytes_per_sec": 0 00:08:31.211 }, 00:08:31.211 "claimed": false, 00:08:31.211 "zoned": false, 00:08:31.211 "supported_io_types": { 00:08:31.211 "read": true, 00:08:31.211 "write": true, 00:08:31.211 "unmap": false, 00:08:31.211 "flush": false, 00:08:31.211 "reset": true, 00:08:31.211 "nvme_admin": false, 00:08:31.211 "nvme_io": false, 00:08:31.211 "nvme_io_md": false, 00:08:31.211 "write_zeroes": true, 00:08:31.211 "zcopy": false, 00:08:31.211 "get_zone_info": false, 00:08:31.211 "zone_management": false, 00:08:31.212 "zone_append": false, 00:08:31.212 "compare": false, 00:08:31.212 "compare_and_write": false, 
00:08:31.212 "abort": false, 00:08:31.212 "seek_hole": false, 00:08:31.212 "seek_data": false, 00:08:31.212 "copy": false, 00:08:31.212 "nvme_iov_md": false 00:08:31.212 }, 00:08:31.212 "memory_domains": [ 00:08:31.212 { 00:08:31.212 "dma_device_id": "system", 00:08:31.212 "dma_device_type": 1 00:08:31.212 }, 00:08:31.212 { 00:08:31.212 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:31.212 "dma_device_type": 2 00:08:31.212 }, 00:08:31.212 { 00:08:31.212 "dma_device_id": "system", 00:08:31.212 "dma_device_type": 1 00:08:31.212 }, 00:08:31.212 { 00:08:31.212 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:31.212 "dma_device_type": 2 00:08:31.212 } 00:08:31.212 ], 00:08:31.212 "driver_specific": { 00:08:31.212 "raid": { 00:08:31.212 "uuid": "1b26e1e1-0f02-4303-9c44-cffb72749ad7", 00:08:31.212 "strip_size_kb": 0, 00:08:31.212 "state": "online", 00:08:31.212 "raid_level": "raid1", 00:08:31.212 "superblock": true, 00:08:31.212 "num_base_bdevs": 2, 00:08:31.212 "num_base_bdevs_discovered": 2, 00:08:31.212 "num_base_bdevs_operational": 2, 00:08:31.212 "base_bdevs_list": [ 00:08:31.212 { 00:08:31.212 "name": "BaseBdev1", 00:08:31.212 "uuid": "4673e91f-f5d6-48ce-bf4d-4eee0ac42da8", 00:08:31.212 "is_configured": true, 00:08:31.212 "data_offset": 2048, 00:08:31.212 "data_size": 63488 00:08:31.212 }, 00:08:31.212 { 00:08:31.212 "name": "BaseBdev2", 00:08:31.212 "uuid": "aba416a7-85cc-4df8-8fec-4e5bd41dad27", 00:08:31.212 "is_configured": true, 00:08:31.212 "data_offset": 2048, 00:08:31.212 "data_size": 63488 00:08:31.212 } 00:08:31.212 ] 00:08:31.212 } 00:08:31.212 } 00:08:31.212 }' 00:08:31.212 09:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:31.472 09:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:31.472 BaseBdev2' 00:08:31.472 09:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 
-- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:31.472 09:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:31.472 09:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:31.472 09:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:31.472 09:53:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.472 09:53:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.472 09:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:31.472 09:53:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.472 09:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:31.472 09:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:31.472 09:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:31.472 09:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:31.472 09:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:31.472 09:53:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.472 09:53:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.472 09:53:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.472 09:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:08:31.472 09:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:31.472 09:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:31.472 09:53:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.472 09:53:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.472 [2024-10-21 09:53:07.940363] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:31.472 09:53:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.472 09:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:31.472 09:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:31.472 09:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:31.472 09:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:08:31.473 09:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:31.473 09:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:08:31.473 09:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:31.473 09:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:31.473 09:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:31.473 09:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:31.473 09:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:31.473 09:53:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:31.473 09:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:31.473 09:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:31.473 09:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:31.473 09:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.473 09:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:31.473 09:53:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.473 09:53:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.733 09:53:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.733 09:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:31.733 "name": "Existed_Raid", 00:08:31.733 "uuid": "1b26e1e1-0f02-4303-9c44-cffb72749ad7", 00:08:31.733 "strip_size_kb": 0, 00:08:31.733 "state": "online", 00:08:31.733 "raid_level": "raid1", 00:08:31.733 "superblock": true, 00:08:31.733 "num_base_bdevs": 2, 00:08:31.733 "num_base_bdevs_discovered": 1, 00:08:31.733 "num_base_bdevs_operational": 1, 00:08:31.733 "base_bdevs_list": [ 00:08:31.733 { 00:08:31.733 "name": null, 00:08:31.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:31.733 "is_configured": false, 00:08:31.733 "data_offset": 0, 00:08:31.733 "data_size": 63488 00:08:31.733 }, 00:08:31.733 { 00:08:31.733 "name": "BaseBdev2", 00:08:31.733 "uuid": "aba416a7-85cc-4df8-8fec-4e5bd41dad27", 00:08:31.733 "is_configured": true, 00:08:31.733 "data_offset": 2048, 00:08:31.733 "data_size": 63488 00:08:31.733 } 00:08:31.733 ] 00:08:31.733 }' 00:08:31.733 
09:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:31.733 09:53:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.993 09:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:31.993 09:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:31.993 09:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.993 09:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:31.993 09:53:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.993 09:53:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.993 09:53:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.993 09:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:31.993 09:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:31.993 09:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:31.993 09:53:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.993 09:53:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.993 [2024-10-21 09:53:08.536524] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:31.993 [2024-10-21 09:53:08.536756] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:32.254 [2024-10-21 09:53:08.634376] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:32.254 [2024-10-21 09:53:08.634558] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:32.254 [2024-10-21 09:53:08.634624] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state offline 00:08:32.254 09:53:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.254 09:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:32.254 09:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:32.255 09:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.255 09:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:32.255 09:53:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.255 09:53:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.255 09:53:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.255 09:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:32.255 09:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:32.255 09:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:32.255 09:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62530 00:08:32.255 09:53:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 62530 ']' 00:08:32.255 09:53:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 62530 00:08:32.255 09:53:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:08:32.255 09:53:08 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:32.255 09:53:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62530 00:08:32.255 killing process with pid 62530 00:08:32.255 09:53:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:32.255 09:53:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:32.255 09:53:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62530' 00:08:32.255 09:53:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 62530 00:08:32.255 [2024-10-21 09:53:08.726845] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:32.255 09:53:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 62530 00:08:32.255 [2024-10-21 09:53:08.745468] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:33.644 ************************************ 00:08:33.644 END TEST raid_state_function_test_sb 00:08:33.644 ************************************ 00:08:33.644 09:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:33.644 00:08:33.644 real 0m5.149s 00:08:33.644 user 0m7.239s 00:08:33.644 sys 0m0.926s 00:08:33.644 09:53:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:33.644 09:53:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.644 09:53:10 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:08:33.644 09:53:10 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:33.644 09:53:10 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:33.644 09:53:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:33.644 
************************************ 00:08:33.644 START TEST raid_superblock_test 00:08:33.644 ************************************ 00:08:33.644 09:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:08:33.644 09:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:08:33.644 09:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:08:33.644 09:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:33.644 09:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:33.644 09:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:33.644 09:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:33.644 09:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:33.644 09:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:33.644 09:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:33.644 09:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:33.644 09:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:33.644 09:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:33.644 09:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:33.644 09:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:08:33.644 09:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:08:33.644 09:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62777 00:08:33.644 09:53:10 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:33.644 09:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 62777 00:08:33.644 09:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 62777 ']' 00:08:33.644 09:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:33.644 09:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:33.644 09:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:33.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:33.644 09:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:33.644 09:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.644 [2024-10-21 09:53:10.124616] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
00:08:33.644 [2024-10-21 09:53:10.124786] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62777 ] 00:08:33.904 [2024-10-21 09:53:10.289436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.904 [2024-10-21 09:53:10.425385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.163 [2024-10-21 09:53:10.676820] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:34.163 [2024-10-21 09:53:10.676991] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:34.422 09:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:34.422 09:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:08:34.422 09:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:34.422 09:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:34.422 09:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:34.422 09:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:34.422 09:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:34.422 09:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:34.422 09:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:34.422 09:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:34.422 09:53:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:34.422 
09:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.422 09:53:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.422 malloc1 00:08:34.422 09:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.422 09:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:34.422 09:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.422 09:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.422 [2024-10-21 09:53:11.011545] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:34.422 [2024-10-21 09:53:11.011650] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:34.422 [2024-10-21 09:53:11.011681] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:08:34.422 [2024-10-21 09:53:11.011692] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:34.422 [2024-10-21 09:53:11.014211] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:34.422 [2024-10-21 09:53:11.014251] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:34.682 pt1 00:08:34.682 09:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.682 09:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:34.682 09:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:34.682 09:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:34.682 09:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:34.682 09:53:11 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:34.682 09:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:34.682 09:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:34.682 09:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:34.682 09:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:34.682 09:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.682 09:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.682 malloc2 00:08:34.682 09:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.682 09:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:34.682 09:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.682 09:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.682 [2024-10-21 09:53:11.077769] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:34.682 [2024-10-21 09:53:11.077909] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:34.682 [2024-10-21 09:53:11.077952] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:08:34.682 [2024-10-21 09:53:11.077979] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:34.682 [2024-10-21 09:53:11.080266] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:34.682 [2024-10-21 09:53:11.080337] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:34.682 
pt2 00:08:34.682 09:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.682 09:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:34.682 09:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:34.682 09:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:08:34.682 09:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.682 09:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.682 [2024-10-21 09:53:11.089811] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:34.682 [2024-10-21 09:53:11.091786] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:34.682 [2024-10-21 09:53:11.091991] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000005b80 00:08:34.682 [2024-10-21 09:53:11.092037] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:34.682 [2024-10-21 09:53:11.092293] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:08:34.682 [2024-10-21 09:53:11.092508] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000005b80 00:08:34.682 [2024-10-21 09:53:11.092555] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000005b80 00:08:34.682 [2024-10-21 09:53:11.092749] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:34.682 09:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.682 09:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:34.682 09:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:08:34.682 09:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:34.682 09:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:34.682 09:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:34.682 09:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:34.682 09:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:34.682 09:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:34.682 09:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:34.682 09:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:34.682 09:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.682 09:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:34.682 09:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.682 09:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.682 09:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.682 09:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:34.682 "name": "raid_bdev1", 00:08:34.682 "uuid": "35ce87be-13be-423d-8b51-5bf643450df6", 00:08:34.682 "strip_size_kb": 0, 00:08:34.682 "state": "online", 00:08:34.682 "raid_level": "raid1", 00:08:34.682 "superblock": true, 00:08:34.682 "num_base_bdevs": 2, 00:08:34.682 "num_base_bdevs_discovered": 2, 00:08:34.682 "num_base_bdevs_operational": 2, 00:08:34.682 "base_bdevs_list": [ 00:08:34.682 { 00:08:34.682 "name": "pt1", 00:08:34.682 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:08:34.682 "is_configured": true, 00:08:34.682 "data_offset": 2048, 00:08:34.682 "data_size": 63488 00:08:34.682 }, 00:08:34.682 { 00:08:34.682 "name": "pt2", 00:08:34.682 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:34.682 "is_configured": true, 00:08:34.682 "data_offset": 2048, 00:08:34.682 "data_size": 63488 00:08:34.682 } 00:08:34.682 ] 00:08:34.682 }' 00:08:34.682 09:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:34.682 09:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.251 09:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:35.251 09:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:35.251 09:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:35.251 09:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:35.251 09:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:35.251 09:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:35.251 09:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:35.251 09:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:35.251 09:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.251 09:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.251 [2024-10-21 09:53:11.565318] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:35.251 09:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.251 09:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # 
raid_bdev_info='{ 00:08:35.251 "name": "raid_bdev1", 00:08:35.251 "aliases": [ 00:08:35.251 "35ce87be-13be-423d-8b51-5bf643450df6" 00:08:35.251 ], 00:08:35.251 "product_name": "Raid Volume", 00:08:35.251 "block_size": 512, 00:08:35.251 "num_blocks": 63488, 00:08:35.251 "uuid": "35ce87be-13be-423d-8b51-5bf643450df6", 00:08:35.251 "assigned_rate_limits": { 00:08:35.251 "rw_ios_per_sec": 0, 00:08:35.252 "rw_mbytes_per_sec": 0, 00:08:35.252 "r_mbytes_per_sec": 0, 00:08:35.252 "w_mbytes_per_sec": 0 00:08:35.252 }, 00:08:35.252 "claimed": false, 00:08:35.252 "zoned": false, 00:08:35.252 "supported_io_types": { 00:08:35.252 "read": true, 00:08:35.252 "write": true, 00:08:35.252 "unmap": false, 00:08:35.252 "flush": false, 00:08:35.252 "reset": true, 00:08:35.252 "nvme_admin": false, 00:08:35.252 "nvme_io": false, 00:08:35.252 "nvme_io_md": false, 00:08:35.252 "write_zeroes": true, 00:08:35.252 "zcopy": false, 00:08:35.252 "get_zone_info": false, 00:08:35.252 "zone_management": false, 00:08:35.252 "zone_append": false, 00:08:35.252 "compare": false, 00:08:35.252 "compare_and_write": false, 00:08:35.252 "abort": false, 00:08:35.252 "seek_hole": false, 00:08:35.252 "seek_data": false, 00:08:35.252 "copy": false, 00:08:35.252 "nvme_iov_md": false 00:08:35.252 }, 00:08:35.252 "memory_domains": [ 00:08:35.252 { 00:08:35.252 "dma_device_id": "system", 00:08:35.252 "dma_device_type": 1 00:08:35.252 }, 00:08:35.252 { 00:08:35.252 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.252 "dma_device_type": 2 00:08:35.252 }, 00:08:35.252 { 00:08:35.252 "dma_device_id": "system", 00:08:35.252 "dma_device_type": 1 00:08:35.252 }, 00:08:35.252 { 00:08:35.252 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.252 "dma_device_type": 2 00:08:35.252 } 00:08:35.252 ], 00:08:35.252 "driver_specific": { 00:08:35.252 "raid": { 00:08:35.252 "uuid": "35ce87be-13be-423d-8b51-5bf643450df6", 00:08:35.252 "strip_size_kb": 0, 00:08:35.252 "state": "online", 00:08:35.252 "raid_level": "raid1", 
00:08:35.252 "superblock": true, 00:08:35.252 "num_base_bdevs": 2, 00:08:35.252 "num_base_bdevs_discovered": 2, 00:08:35.252 "num_base_bdevs_operational": 2, 00:08:35.252 "base_bdevs_list": [ 00:08:35.252 { 00:08:35.252 "name": "pt1", 00:08:35.252 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:35.252 "is_configured": true, 00:08:35.252 "data_offset": 2048, 00:08:35.252 "data_size": 63488 00:08:35.252 }, 00:08:35.252 { 00:08:35.252 "name": "pt2", 00:08:35.252 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:35.252 "is_configured": true, 00:08:35.252 "data_offset": 2048, 00:08:35.252 "data_size": 63488 00:08:35.252 } 00:08:35.252 ] 00:08:35.252 } 00:08:35.252 } 00:08:35.252 }' 00:08:35.252 09:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:35.252 09:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:35.252 pt2' 00:08:35.252 09:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:35.252 09:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:35.252 09:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:35.252 09:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:35.252 09:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:35.252 09:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.252 09:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.252 09:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.252 09:53:11 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:35.252 09:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:35.252 09:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:35.252 09:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:35.252 09:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:35.252 09:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.252 09:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.252 09:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.252 09:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:35.252 09:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:35.252 09:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:35.252 09:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.252 09:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.252 09:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:35.252 [2024-10-21 09:53:11.784896] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:35.252 09:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.252 09:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=35ce87be-13be-423d-8b51-5bf643450df6 00:08:35.252 09:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 35ce87be-13be-423d-8b51-5bf643450df6 ']' 00:08:35.252 09:53:11 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:35.252 09:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.252 09:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.252 [2024-10-21 09:53:11.824671] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:35.252 [2024-10-21 09:53:11.824715] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:35.252 [2024-10-21 09:53:11.824843] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:35.252 [2024-10-21 09:53:11.824909] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:35.252 [2024-10-21 09:53:11.824922] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000005b80 name raid_bdev1, state offline 00:08:35.252 09:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.252 09:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.252 09:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:35.252 09:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.252 09:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.510 09:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.510 09:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:35.510 09:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:35.510 09:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:35.510 09:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd 
bdev_passthru_delete pt1 00:08:35.510 09:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.510 09:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.510 09:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.510 09:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:35.510 09:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:35.510 09:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.510 09:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.510 09:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.510 09:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:35.510 09:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:35.510 09:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.510 09:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.510 09:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.510 09:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:35.510 09:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:35.510 09:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:08:35.510 09:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:35.510 09:53:11 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:35.510 09:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:35.510 09:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:35.510 09:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:35.510 09:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:35.510 09:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.510 09:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.510 [2024-10-21 09:53:11.960456] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:35.510 [2024-10-21 09:53:11.962727] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:35.510 [2024-10-21 09:53:11.962815] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:35.510 [2024-10-21 09:53:11.962880] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:35.510 [2024-10-21 09:53:11.962895] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:35.510 [2024-10-21 09:53:11.962909] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000005f00 name raid_bdev1, state configuring 00:08:35.510 request: 00:08:35.510 { 00:08:35.510 "name": "raid_bdev1", 00:08:35.510 "raid_level": "raid1", 00:08:35.510 "base_bdevs": [ 00:08:35.510 "malloc1", 00:08:35.510 "malloc2" 00:08:35.510 ], 00:08:35.510 "superblock": false, 00:08:35.510 "method": "bdev_raid_create", 00:08:35.510 "req_id": 1 00:08:35.510 } 00:08:35.510 Got 
JSON-RPC error response 00:08:35.510 response: 00:08:35.510 { 00:08:35.510 "code": -17, 00:08:35.510 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:35.510 } 00:08:35.510 09:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:35.510 09:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:08:35.510 09:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:35.510 09:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:35.510 09:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:35.510 09:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.510 09:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.510 09:53:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:35.510 09:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.510 09:53:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.510 09:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:35.510 09:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:35.510 09:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:35.510 09:53:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.510 09:53:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.510 [2024-10-21 09:53:12.024307] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:35.510 [2024-10-21 09:53:12.024496] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:08:35.510 [2024-10-21 09:53:12.024534] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:08:35.510 [2024-10-21 09:53:12.024582] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:35.510 [2024-10-21 09:53:12.027118] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:35.510 [2024-10-21 09:53:12.027196] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:35.510 [2024-10-21 09:53:12.027326] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:35.510 [2024-10-21 09:53:12.027422] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:35.510 pt1 00:08:35.510 09:53:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.510 09:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:08:35.510 09:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:35.510 09:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:35.510 09:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:35.510 09:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:35.510 09:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:35.510 09:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:35.510 09:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:35.510 09:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:35.510 09:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:35.510 
09:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:35.510 09:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.510 09:53:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.510 09:53:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.510 09:53:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.510 09:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:35.510 "name": "raid_bdev1", 00:08:35.510 "uuid": "35ce87be-13be-423d-8b51-5bf643450df6", 00:08:35.510 "strip_size_kb": 0, 00:08:35.510 "state": "configuring", 00:08:35.510 "raid_level": "raid1", 00:08:35.510 "superblock": true, 00:08:35.510 "num_base_bdevs": 2, 00:08:35.510 "num_base_bdevs_discovered": 1, 00:08:35.510 "num_base_bdevs_operational": 2, 00:08:35.511 "base_bdevs_list": [ 00:08:35.511 { 00:08:35.511 "name": "pt1", 00:08:35.511 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:35.511 "is_configured": true, 00:08:35.511 "data_offset": 2048, 00:08:35.511 "data_size": 63488 00:08:35.511 }, 00:08:35.511 { 00:08:35.511 "name": null, 00:08:35.511 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:35.511 "is_configured": false, 00:08:35.511 "data_offset": 2048, 00:08:35.511 "data_size": 63488 00:08:35.511 } 00:08:35.511 ] 00:08:35.511 }' 00:08:35.511 09:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:35.511 09:53:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.078 09:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:08:36.078 09:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:36.078 09:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs 
)) 00:08:36.078 09:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:36.078 09:53:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.078 09:53:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.079 [2024-10-21 09:53:12.435622] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:36.079 [2024-10-21 09:53:12.435808] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:36.079 [2024-10-21 09:53:12.435836] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:08:36.079 [2024-10-21 09:53:12.435849] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:36.079 [2024-10-21 09:53:12.436424] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:36.079 [2024-10-21 09:53:12.436449] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:36.079 [2024-10-21 09:53:12.436548] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:36.079 [2024-10-21 09:53:12.436598] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:36.079 [2024-10-21 09:53:12.436729] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:08:36.079 [2024-10-21 09:53:12.436748] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:36.079 [2024-10-21 09:53:12.437002] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:08:36.079 [2024-10-21 09:53:12.437155] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:08:36.079 [2024-10-21 09:53:12.437172] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000006280 00:08:36.079 [2024-10-21 09:53:12.437319] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:36.079 pt2 00:08:36.079 09:53:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.079 09:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:36.079 09:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:36.079 09:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:36.079 09:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:36.079 09:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:36.079 09:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:36.079 09:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:36.079 09:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:36.079 09:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:36.079 09:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:36.079 09:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:36.079 09:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:36.079 09:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.079 09:53:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.079 09:53:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.079 09:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:08:36.079 09:53:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.079 09:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:36.079 "name": "raid_bdev1", 00:08:36.079 "uuid": "35ce87be-13be-423d-8b51-5bf643450df6", 00:08:36.079 "strip_size_kb": 0, 00:08:36.079 "state": "online", 00:08:36.079 "raid_level": "raid1", 00:08:36.079 "superblock": true, 00:08:36.079 "num_base_bdevs": 2, 00:08:36.079 "num_base_bdevs_discovered": 2, 00:08:36.079 "num_base_bdevs_operational": 2, 00:08:36.079 "base_bdevs_list": [ 00:08:36.079 { 00:08:36.079 "name": "pt1", 00:08:36.079 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:36.079 "is_configured": true, 00:08:36.079 "data_offset": 2048, 00:08:36.079 "data_size": 63488 00:08:36.079 }, 00:08:36.079 { 00:08:36.079 "name": "pt2", 00:08:36.079 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:36.079 "is_configured": true, 00:08:36.079 "data_offset": 2048, 00:08:36.079 "data_size": 63488 00:08:36.079 } 00:08:36.079 ] 00:08:36.079 }' 00:08:36.079 09:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:36.079 09:53:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.338 09:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:36.338 09:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:36.338 09:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:36.338 09:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:36.338 09:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:36.338 09:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:36.338 09:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 
-- # jq '.[]' 00:08:36.338 09:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:36.338 09:53:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.338 09:53:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.338 [2024-10-21 09:53:12.859156] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:36.338 09:53:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.338 09:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:36.338 "name": "raid_bdev1", 00:08:36.338 "aliases": [ 00:08:36.338 "35ce87be-13be-423d-8b51-5bf643450df6" 00:08:36.338 ], 00:08:36.338 "product_name": "Raid Volume", 00:08:36.338 "block_size": 512, 00:08:36.338 "num_blocks": 63488, 00:08:36.338 "uuid": "35ce87be-13be-423d-8b51-5bf643450df6", 00:08:36.338 "assigned_rate_limits": { 00:08:36.338 "rw_ios_per_sec": 0, 00:08:36.338 "rw_mbytes_per_sec": 0, 00:08:36.338 "r_mbytes_per_sec": 0, 00:08:36.338 "w_mbytes_per_sec": 0 00:08:36.338 }, 00:08:36.338 "claimed": false, 00:08:36.338 "zoned": false, 00:08:36.338 "supported_io_types": { 00:08:36.338 "read": true, 00:08:36.338 "write": true, 00:08:36.338 "unmap": false, 00:08:36.338 "flush": false, 00:08:36.338 "reset": true, 00:08:36.338 "nvme_admin": false, 00:08:36.338 "nvme_io": false, 00:08:36.339 "nvme_io_md": false, 00:08:36.339 "write_zeroes": true, 00:08:36.339 "zcopy": false, 00:08:36.339 "get_zone_info": false, 00:08:36.339 "zone_management": false, 00:08:36.339 "zone_append": false, 00:08:36.339 "compare": false, 00:08:36.339 "compare_and_write": false, 00:08:36.339 "abort": false, 00:08:36.339 "seek_hole": false, 00:08:36.339 "seek_data": false, 00:08:36.339 "copy": false, 00:08:36.339 "nvme_iov_md": false 00:08:36.339 }, 00:08:36.339 "memory_domains": [ 00:08:36.339 { 00:08:36.339 "dma_device_id": 
"system", 00:08:36.339 "dma_device_type": 1 00:08:36.339 }, 00:08:36.339 { 00:08:36.339 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:36.339 "dma_device_type": 2 00:08:36.339 }, 00:08:36.339 { 00:08:36.339 "dma_device_id": "system", 00:08:36.339 "dma_device_type": 1 00:08:36.339 }, 00:08:36.339 { 00:08:36.339 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:36.339 "dma_device_type": 2 00:08:36.339 } 00:08:36.339 ], 00:08:36.339 "driver_specific": { 00:08:36.339 "raid": { 00:08:36.339 "uuid": "35ce87be-13be-423d-8b51-5bf643450df6", 00:08:36.339 "strip_size_kb": 0, 00:08:36.339 "state": "online", 00:08:36.339 "raid_level": "raid1", 00:08:36.339 "superblock": true, 00:08:36.339 "num_base_bdevs": 2, 00:08:36.339 "num_base_bdevs_discovered": 2, 00:08:36.339 "num_base_bdevs_operational": 2, 00:08:36.339 "base_bdevs_list": [ 00:08:36.339 { 00:08:36.339 "name": "pt1", 00:08:36.339 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:36.339 "is_configured": true, 00:08:36.339 "data_offset": 2048, 00:08:36.339 "data_size": 63488 00:08:36.339 }, 00:08:36.339 { 00:08:36.339 "name": "pt2", 00:08:36.339 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:36.339 "is_configured": true, 00:08:36.339 "data_offset": 2048, 00:08:36.339 "data_size": 63488 00:08:36.339 } 00:08:36.339 ] 00:08:36.339 } 00:08:36.339 } 00:08:36.339 }' 00:08:36.339 09:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:36.599 09:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:36.599 pt2' 00:08:36.599 09:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:36.599 09:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:36.599 09:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 
00:08:36.599 09:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:36.599 09:53:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.599 09:53:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.599 09:53:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:36.599 09:53:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.599 09:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:36.599 09:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:36.599 09:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:36.599 09:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:36.599 09:53:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.599 09:53:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.599 09:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:36.599 09:53:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.599 09:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:36.599 09:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:36.599 09:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:36.599 09:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:36.599 09:53:13 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.599 09:53:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.599 [2024-10-21 09:53:13.102757] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:36.599 09:53:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.599 09:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 35ce87be-13be-423d-8b51-5bf643450df6 '!=' 35ce87be-13be-423d-8b51-5bf643450df6 ']' 00:08:36.599 09:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:08:36.599 09:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:36.599 09:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:36.599 09:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:08:36.599 09:53:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.599 09:53:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.599 [2024-10-21 09:53:13.134527] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:08:36.599 09:53:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.599 09:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:36.599 09:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:36.599 09:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:36.599 09:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:36.599 09:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:36.599 09:53:13 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:36.599 09:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:36.599 09:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:36.599 09:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:36.599 09:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:36.599 09:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.599 09:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:36.599 09:53:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.599 09:53:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.599 09:53:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.599 09:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:36.599 "name": "raid_bdev1", 00:08:36.599 "uuid": "35ce87be-13be-423d-8b51-5bf643450df6", 00:08:36.599 "strip_size_kb": 0, 00:08:36.599 "state": "online", 00:08:36.599 "raid_level": "raid1", 00:08:36.599 "superblock": true, 00:08:36.599 "num_base_bdevs": 2, 00:08:36.599 "num_base_bdevs_discovered": 1, 00:08:36.599 "num_base_bdevs_operational": 1, 00:08:36.599 "base_bdevs_list": [ 00:08:36.599 { 00:08:36.599 "name": null, 00:08:36.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:36.599 "is_configured": false, 00:08:36.599 "data_offset": 0, 00:08:36.599 "data_size": 63488 00:08:36.599 }, 00:08:36.599 { 00:08:36.599 "name": "pt2", 00:08:36.599 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:36.599 "is_configured": true, 00:08:36.599 "data_offset": 2048, 00:08:36.599 "data_size": 63488 00:08:36.599 } 00:08:36.599 ] 00:08:36.599 }' 
00:08:36.599 09:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:36.599 09:53:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.170 09:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:37.170 09:53:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.170 09:53:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.170 [2024-10-21 09:53:13.605672] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:37.170 [2024-10-21 09:53:13.605819] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:37.170 [2024-10-21 09:53:13.605938] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:37.170 [2024-10-21 09:53:13.606008] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:37.170 [2024-10-21 09:53:13.606068] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:08:37.170 09:53:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.170 09:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.170 09:53:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.170 09:53:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.170 09:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:08:37.170 09:53:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.170 09:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:08:37.170 09:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' 
']' 00:08:37.170 09:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:08:37.170 09:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:37.170 09:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:08:37.170 09:53:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.170 09:53:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.170 09:53:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.170 09:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:08:37.170 09:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:37.170 09:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:08:37.170 09:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:08:37.170 09:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 00:08:37.170 09:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:37.170 09:53:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.170 09:53:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.170 [2024-10-21 09:53:13.677527] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:37.170 [2024-10-21 09:53:13.677737] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:37.170 [2024-10-21 09:53:13.677779] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:08:37.170 [2024-10-21 09:53:13.677809] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:37.170 
[2024-10-21 09:53:13.680329] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:37.170 [2024-10-21 09:53:13.680405] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:37.170 [2024-10-21 09:53:13.680530] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:37.170 [2024-10-21 09:53:13.680618] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:37.170 [2024-10-21 09:53:13.680780] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:08:37.170 [2024-10-21 09:53:13.680819] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:37.170 [2024-10-21 09:53:13.681061] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:08:37.170 [2024-10-21 09:53:13.681259] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:08:37.170 [2024-10-21 09:53:13.681299] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:08:37.170 [2024-10-21 09:53:13.681523] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:37.170 pt2 00:08:37.170 09:53:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.170 09:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:37.170 09:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:37.170 09:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:37.170 09:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:37.170 09:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:37.170 09:53:13 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:37.170 09:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:37.170 09:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:37.170 09:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:37.170 09:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:37.170 09:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.170 09:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:37.170 09:53:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.170 09:53:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.171 09:53:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.171 09:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:37.171 "name": "raid_bdev1", 00:08:37.171 "uuid": "35ce87be-13be-423d-8b51-5bf643450df6", 00:08:37.171 "strip_size_kb": 0, 00:08:37.171 "state": "online", 00:08:37.171 "raid_level": "raid1", 00:08:37.171 "superblock": true, 00:08:37.171 "num_base_bdevs": 2, 00:08:37.171 "num_base_bdevs_discovered": 1, 00:08:37.171 "num_base_bdevs_operational": 1, 00:08:37.171 "base_bdevs_list": [ 00:08:37.171 { 00:08:37.171 "name": null, 00:08:37.171 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:37.171 "is_configured": false, 00:08:37.171 "data_offset": 2048, 00:08:37.171 "data_size": 63488 00:08:37.171 }, 00:08:37.171 { 00:08:37.171 "name": "pt2", 00:08:37.171 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:37.171 "is_configured": true, 00:08:37.171 "data_offset": 2048, 00:08:37.171 "data_size": 63488 00:08:37.171 } 00:08:37.171 ] 00:08:37.171 }' 
00:08:37.171 09:53:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:37.171 09:53:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.740 09:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:37.740 09:53:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.740 09:53:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.740 [2024-10-21 09:53:14.112732] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:37.740 [2024-10-21 09:53:14.112780] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:37.740 [2024-10-21 09:53:14.112869] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:37.740 [2024-10-21 09:53:14.112927] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:37.740 [2024-10-21 09:53:14.112937] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:08:37.740 09:53:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.740 09:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.740 09:53:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.740 09:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:08:37.740 09:53:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.740 09:53:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.740 09:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:08:37.740 09:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' 
']' 00:08:37.740 09:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:08:37.740 09:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:37.740 09:53:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.740 09:53:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.740 [2024-10-21 09:53:14.172709] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:37.740 [2024-10-21 09:53:14.172804] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:37.740 [2024-10-21 09:53:14.172832] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:08:37.740 [2024-10-21 09:53:14.172842] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:37.740 [2024-10-21 09:53:14.175357] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:37.740 [2024-10-21 09:53:14.175475] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:37.740 [2024-10-21 09:53:14.175602] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:37.740 [2024-10-21 09:53:14.175659] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:37.740 [2024-10-21 09:53:14.175795] bdev_raid.c:3679:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:08:37.740 [2024-10-21 09:53:14.175805] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:37.740 [2024-10-21 09:53:14.175821] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state configuring 00:08:37.740 [2024-10-21 09:53:14.175885] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
pt2 is claimed 00:08:37.740 [2024-10-21 09:53:14.175969] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:08:37.740 [2024-10-21 09:53:14.175976] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:37.740 [2024-10-21 09:53:14.176206] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:37.740 [2024-10-21 09:53:14.176343] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:08:37.740 [2024-10-21 09:53:14.176356] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:08:37.740 [2024-10-21 09:53:14.176490] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:37.740 pt1 00:08:37.740 09:53:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.740 09:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:08:37.740 09:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:37.740 09:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:37.740 09:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:37.740 09:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:37.740 09:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:37.740 09:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:37.740 09:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:37.740 09:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:37.740 09:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:37.740 09:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:37.740 09:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:37.740 09:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.740 09:53:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.740 09:53:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.740 09:53:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.740 09:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:37.740 "name": "raid_bdev1", 00:08:37.740 "uuid": "35ce87be-13be-423d-8b51-5bf643450df6", 00:08:37.740 "strip_size_kb": 0, 00:08:37.740 "state": "online", 00:08:37.740 "raid_level": "raid1", 00:08:37.741 "superblock": true, 00:08:37.741 "num_base_bdevs": 2, 00:08:37.741 "num_base_bdevs_discovered": 1, 00:08:37.741 "num_base_bdevs_operational": 1, 00:08:37.741 "base_bdevs_list": [ 00:08:37.741 { 00:08:37.741 "name": null, 00:08:37.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:37.741 "is_configured": false, 00:08:37.741 "data_offset": 2048, 00:08:37.741 "data_size": 63488 00:08:37.741 }, 00:08:37.741 { 00:08:37.741 "name": "pt2", 00:08:37.741 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:37.741 "is_configured": true, 00:08:37.741 "data_offset": 2048, 00:08:37.741 "data_size": 63488 00:08:37.741 } 00:08:37.741 ] 00:08:37.741 }' 00:08:37.741 09:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:37.741 09:53:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.000 09:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:08:38.000 09:53:14 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:08:38.000 09:53:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.000 09:53:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.261 09:53:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.261 09:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:08:38.261 09:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:08:38.261 09:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:38.261 09:53:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.261 09:53:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.261 [2024-10-21 09:53:14.644121] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:38.261 09:53:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.261 09:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 35ce87be-13be-423d-8b51-5bf643450df6 '!=' 35ce87be-13be-423d-8b51-5bf643450df6 ']' 00:08:38.261 09:53:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 62777 00:08:38.261 09:53:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 62777 ']' 00:08:38.261 09:53:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 62777 00:08:38.261 09:53:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:08:38.261 09:53:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:38.261 09:53:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62777 00:08:38.261 09:53:14 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:38.261 09:53:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:38.261 09:53:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62777' 00:08:38.261 killing process with pid 62777 00:08:38.261 09:53:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 62777 00:08:38.261 [2024-10-21 09:53:14.708737] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:38.261 [2024-10-21 09:53:14.708894] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:38.261 09:53:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 62777 00:08:38.261 [2024-10-21 09:53:14.708972] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:38.261 [2024-10-21 09:53:14.708990] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:08:38.521 [2024-10-21 09:53:14.926714] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:39.904 09:53:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:39.904 00:08:39.904 real 0m6.094s 00:08:39.904 user 0m9.081s 00:08:39.904 sys 0m1.083s 00:08:39.904 09:53:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:39.904 ************************************ 00:08:39.904 END TEST raid_superblock_test 00:08:39.904 ************************************ 00:08:39.904 09:53:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.904 09:53:16 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:08:39.904 09:53:16 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:39.904 09:53:16 bdev_raid -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:08:39.904 09:53:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:39.904 ************************************ 00:08:39.904 START TEST raid_read_error_test 00:08:39.904 ************************************ 00:08:39.904 09:53:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 2 read 00:08:39.904 09:53:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:39.904 09:53:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:39.904 09:53:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:39.904 09:53:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:39.904 09:53:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:39.904 09:53:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:39.904 09:53:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:39.904 09:53:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:39.904 09:53:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:39.904 09:53:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:39.904 09:53:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:39.904 09:53:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:39.904 09:53:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:39.904 09:53:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:39.904 09:53:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:39.904 09:53:16 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:39.904 09:53:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:39.904 09:53:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:39.904 09:53:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:39.904 09:53:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:39.904 09:53:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:39.904 09:53:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.EWpIzOfWSL 00:08:39.904 09:53:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63107 00:08:39.904 09:53:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:39.904 09:53:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63107 00:08:39.904 09:53:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 63107 ']' 00:08:39.904 09:53:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:39.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:39.904 09:53:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:39.904 09:53:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:39.905 09:53:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:39.905 09:53:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.905 [2024-10-21 09:53:16.298277] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:08:39.905 [2024-10-21 09:53:16.298401] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63107 ] 00:08:39.905 [2024-10-21 09:53:16.462725] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.164 [2024-10-21 09:53:16.607513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.425 [2024-10-21 09:53:16.865116] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:40.425 [2024-10-21 09:53:16.865309] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:40.685 09:53:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:40.685 09:53:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:40.685 09:53:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:40.685 09:53:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:40.685 09:53:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.685 09:53:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.685 BaseBdev1_malloc 00:08:40.685 09:53:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.685 09:53:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:08:40.685 09:53:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.685 09:53:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.685 true 00:08:40.685 09:53:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.685 09:53:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:40.685 09:53:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.685 09:53:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.685 [2024-10-21 09:53:17.183963] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:40.685 [2024-10-21 09:53:17.184033] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:40.685 [2024-10-21 09:53:17.184050] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:40.685 [2024-10-21 09:53:17.184065] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:40.685 [2024-10-21 09:53:17.186350] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:40.685 [2024-10-21 09:53:17.186474] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:40.685 BaseBdev1 00:08:40.685 09:53:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.685 09:53:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:40.685 09:53:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:40.685 09:53:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.685 09:53:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:08:40.685 BaseBdev2_malloc 00:08:40.685 09:53:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.685 09:53:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:40.685 09:53:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.685 09:53:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.685 true 00:08:40.685 09:53:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.685 09:53:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:40.685 09:53:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.685 09:53:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.685 [2024-10-21 09:53:17.256126] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:40.686 [2024-10-21 09:53:17.256210] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:40.686 [2024-10-21 09:53:17.256231] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:08:40.686 [2024-10-21 09:53:17.256243] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:40.686 [2024-10-21 09:53:17.258747] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:40.686 [2024-10-21 09:53:17.258786] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:40.686 BaseBdev2 00:08:40.686 09:53:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.686 09:53:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:40.686 09:53:17 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.686 09:53:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.686 [2024-10-21 09:53:17.268176] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:40.686 [2024-10-21 09:53:17.270366] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:40.686 [2024-10-21 09:53:17.270642] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:08:40.686 [2024-10-21 09:53:17.270661] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:40.686 [2024-10-21 09:53:17.270947] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:08:40.686 [2024-10-21 09:53:17.271168] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:08:40.686 [2024-10-21 09:53:17.271181] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:08:40.686 [2024-10-21 09:53:17.271374] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:40.686 09:53:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.686 09:53:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:40.686 09:53:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:40.686 09:53:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:40.686 09:53:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:40.686 09:53:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:40.686 09:53:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
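`verify_raid_bdev_state` pulls the whole `bdev_raid_get_bdevs all` list and narrows it with `jq -r '.[] | select(.name == "raid_bdev1")'` before comparing fields against its locals (`expected_state=online`, `raid_level=raid1`, `strip_size=0`, `num_base_bdevs_operational=2`). A Python equivalent of that select-and-check, using a sample trimmed from the JSON captured in this log — the exact set of fields the shell helper compares is an assumption:

```python
import json

# Sample trimmed from the bdev_raid_get_bdevs output captured in the log.
RAW = json.dumps([{
    "name": "raid_bdev1",
    "state": "online",
    "raid_level": "raid1",
    "strip_size_kb": 0,
    "num_base_bdevs": 2,
    "num_base_bdevs_discovered": 2,
    "num_base_bdevs_operational": 2,
}])

def select_bdev(raw, name):
    """Python analogue of jq -r '.[] | select(.name == "raid_bdev1")'."""
    return next(b for b in json.loads(raw) if b["name"] == name)

def verify_state(info, expected_state, raid_level, strip_size, operational):
    # Mirrors the comparisons verify_raid_bdev_state drives from its locals
    # (a sketch; the shell helper may check additional fields).
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == operational

info = select_bdev(RAW, "raid_bdev1")
verify_state(info, "online", "raid1", 0, 2)
```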
00:08:40.686 09:53:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:40.686 09:53:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:40.686 09:53:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:40.686 09:53:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:40.686 09:53:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.686 09:53:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.686 09:53:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.945 09:53:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:40.945 09:53:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.945 09:53:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:40.945 "name": "raid_bdev1", 00:08:40.945 "uuid": "d2c06055-6aff-41e3-9a65-8067959580d5", 00:08:40.945 "strip_size_kb": 0, 00:08:40.945 "state": "online", 00:08:40.945 "raid_level": "raid1", 00:08:40.945 "superblock": true, 00:08:40.945 "num_base_bdevs": 2, 00:08:40.945 "num_base_bdevs_discovered": 2, 00:08:40.945 "num_base_bdevs_operational": 2, 00:08:40.945 "base_bdevs_list": [ 00:08:40.945 { 00:08:40.945 "name": "BaseBdev1", 00:08:40.945 "uuid": "c05e660d-8c8c-5569-a8cf-72c2dac6c15c", 00:08:40.945 "is_configured": true, 00:08:40.945 "data_offset": 2048, 00:08:40.945 "data_size": 63488 00:08:40.945 }, 00:08:40.945 { 00:08:40.945 "name": "BaseBdev2", 00:08:40.945 "uuid": "0d23666f-ad09-53c7-9e5d-b9278095495b", 00:08:40.945 "is_configured": true, 00:08:40.945 "data_offset": 2048, 00:08:40.945 "data_size": 63488 00:08:40.945 } 00:08:40.945 ] 00:08:40.945 }' 00:08:40.945 09:53:17 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:40.945 09:53:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.204 09:53:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:41.204 09:53:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:41.204 [2024-10-21 09:53:17.788871] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:08:42.143 09:53:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:42.143 09:53:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.143 09:53:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.143 09:53:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.143 09:53:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:42.143 09:53:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:42.143 09:53:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:08:42.143 09:53:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:42.143 09:53:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:42.143 09:53:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:42.143 09:53:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:42.143 09:53:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:42.143 09:53:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:42.143 09:53:18 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:42.143 09:53:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:42.143 09:53:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:42.143 09:53:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:42.143 09:53:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:42.143 09:53:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.143 09:53:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:42.143 09:53:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.143 09:53:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.143 09:53:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.401 09:53:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:42.401 "name": "raid_bdev1", 00:08:42.401 "uuid": "d2c06055-6aff-41e3-9a65-8067959580d5", 00:08:42.401 "strip_size_kb": 0, 00:08:42.401 "state": "online", 00:08:42.401 "raid_level": "raid1", 00:08:42.401 "superblock": true, 00:08:42.401 "num_base_bdevs": 2, 00:08:42.401 "num_base_bdevs_discovered": 2, 00:08:42.401 "num_base_bdevs_operational": 2, 00:08:42.401 "base_bdevs_list": [ 00:08:42.401 { 00:08:42.401 "name": "BaseBdev1", 00:08:42.401 "uuid": "c05e660d-8c8c-5569-a8cf-72c2dac6c15c", 00:08:42.401 "is_configured": true, 00:08:42.401 "data_offset": 2048, 00:08:42.401 "data_size": 63488 00:08:42.401 }, 00:08:42.401 { 00:08:42.401 "name": "BaseBdev2", 00:08:42.401 "uuid": "0d23666f-ad09-53c7-9e5d-b9278095495b", 00:08:42.401 "is_configured": true, 00:08:42.401 "data_offset": 2048, 00:08:42.401 "data_size": 63488 
00:08:42.401 } 00:08:42.401 ] 00:08:42.401 }' 00:08:42.401 09:53:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:42.401 09:53:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.659 09:53:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:42.659 09:53:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.659 09:53:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.659 [2024-10-21 09:53:19.150799] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:42.659 [2024-10-21 09:53:19.150941] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:42.660 [2024-10-21 09:53:19.153585] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:42.660 [2024-10-21 09:53:19.153634] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:42.660 [2024-10-21 09:53:19.153723] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:42.660 [2024-10-21 09:53:19.153740] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:08:42.660 { 00:08:42.660 "results": [ 00:08:42.660 { 00:08:42.660 "job": "raid_bdev1", 00:08:42.660 "core_mask": "0x1", 00:08:42.660 "workload": "randrw", 00:08:42.660 "percentage": 50, 00:08:42.660 "status": "finished", 00:08:42.660 "queue_depth": 1, 00:08:42.660 "io_size": 131072, 00:08:42.660 "runtime": 1.362605, 00:08:42.660 "iops": 14641.073531947997, 00:08:42.660 "mibps": 1830.1341914934997, 00:08:42.660 "io_failed": 0, 00:08:42.660 "io_timeout": 0, 00:08:42.660 "avg_latency_us": 65.85361359731205, 00:08:42.660 "min_latency_us": 21.463755458515283, 00:08:42.660 "max_latency_us": 1330.7528384279476 00:08:42.660 } 00:08:42.660 ], 
00:08:42.660 "core_count": 1 00:08:42.660 } 00:08:42.660 09:53:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.660 09:53:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63107 00:08:42.660 09:53:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 63107 ']' 00:08:42.660 09:53:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 63107 00:08:42.660 09:53:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:08:42.660 09:53:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:42.660 09:53:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63107 00:08:42.660 09:53:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:42.660 09:53:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:42.660 killing process with pid 63107 00:08:42.660 09:53:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63107' 00:08:42.660 09:53:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 63107 00:08:42.660 [2024-10-21 09:53:19.198191] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:42.660 09:53:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 63107 00:08:42.919 [2024-10-21 09:53:19.345072] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:44.298 09:53:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.EWpIzOfWSL 00:08:44.299 09:53:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:44.299 09:53:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:44.299 09:53:20 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:44.299 09:53:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:44.299 09:53:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:44.299 09:53:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:44.299 ************************************ 00:08:44.299 END TEST raid_read_error_test 00:08:44.299 ************************************ 00:08:44.299 09:53:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:44.299 00:08:44.299 real 0m4.415s 00:08:44.299 user 0m5.168s 00:08:44.299 sys 0m0.598s 00:08:44.299 09:53:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:44.299 09:53:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.299 09:53:20 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:08:44.299 09:53:20 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:44.299 09:53:20 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:44.299 09:53:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:44.299 ************************************ 00:08:44.299 START TEST raid_write_error_test 00:08:44.299 ************************************ 00:08:44.299 09:53:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 2 write 00:08:44.299 09:53:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:44.299 09:53:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:44.299 09:53:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:44.299 09:53:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:44.299 09:53:20 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:44.299 09:53:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:44.299 09:53:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:44.299 09:53:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:44.299 09:53:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:44.299 09:53:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:44.299 09:53:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:44.299 09:53:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:44.299 09:53:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:44.299 09:53:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:44.299 09:53:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:44.299 09:53:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:44.299 09:53:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:44.299 09:53:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:44.299 09:53:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:44.299 09:53:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:44.299 09:53:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:44.299 09:53:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.qE1sYo4xjH 00:08:44.299 09:53:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63252 00:08:44.299 09:53:20 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:44.299 09:53:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63252 00:08:44.299 09:53:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 63252 ']' 00:08:44.299 09:53:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:44.299 09:53:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:44.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:44.299 09:53:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:44.299 09:53:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:44.299 09:53:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.299 [2024-10-21 09:53:20.780761] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
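The branch at `bdev_raid.sh@832-835` decides how many base bdevs should survive the injected error: in the read test above, a failed raid1 read is served from the mirror and both bases stay (`expected_num_base_bdevs=2`); in this write test, the failing base is dropped from the array (`expected_num_base_bdevs=1`). A sketch of that decision, covering only the raid1 cases exercised in this log:

```python
def expected_num_base_bdevs(raid_level, num_base_bdevs, error_io_type):
    """Sketch of the bdev_raid.sh@832-835 branch for raid1 only.

    A failed raid1 read is retried on the mirror, so no base bdev is
    removed; a failed write fails the base bdev out of the array.
    Behaviour for other raid levels is not shown in this log.
    """
    if raid_level == "raid1" and error_io_type != "write":
        return num_base_bdevs          # read test: expected 2 of 2
    return num_base_bdevs - 1          # write test: expected 1 of 2
```

This matches the two `verify_raid_bdev_state raid_bdev1 online raid1 0 <n>` calls in the log: `n=2` after the read-error injection, `n=1` after the write-error injection.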
00:08:44.299 [2024-10-21 09:53:20.780894] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63252 ] 00:08:44.558 [2024-10-21 09:53:20.942497] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.558 [2024-10-21 09:53:21.078510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.817 [2024-10-21 09:53:21.320372] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:44.817 [2024-10-21 09:53:21.320440] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:45.077 09:53:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:45.077 09:53:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:45.077 09:53:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:45.077 09:53:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:45.077 09:53:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.077 09:53:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.077 BaseBdev1_malloc 00:08:45.077 09:53:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.077 09:53:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:45.077 09:53:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.077 09:53:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.077 true 00:08:45.077 09:53:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:08:45.077 09:53:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:45.077 09:53:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.077 09:53:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.077 [2024-10-21 09:53:21.669452] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:45.077 [2024-10-21 09:53:21.669520] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:45.077 [2024-10-21 09:53:21.669538] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:45.077 [2024-10-21 09:53:21.669553] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:45.077 [2024-10-21 09:53:21.671896] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:45.335 [2024-10-21 09:53:21.672046] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:45.336 BaseBdev1 00:08:45.336 09:53:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.336 09:53:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:45.336 09:53:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:45.336 09:53:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.336 09:53:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.336 BaseBdev2_malloc 00:08:45.336 09:53:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.336 09:53:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:45.336 09:53:21 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.336 09:53:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.336 true 00:08:45.336 09:53:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.336 09:53:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:45.336 09:53:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.336 09:53:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.336 [2024-10-21 09:53:21.746680] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:45.336 [2024-10-21 09:53:21.746757] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:45.336 [2024-10-21 09:53:21.746777] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:08:45.336 [2024-10-21 09:53:21.746789] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:45.336 [2024-10-21 09:53:21.749254] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:45.336 [2024-10-21 09:53:21.749297] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:45.336 BaseBdev2 00:08:45.336 09:53:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.336 09:53:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:45.336 09:53:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.336 09:53:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.336 [2024-10-21 09:53:21.758700] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:08:45.336 [2024-10-21 09:53:21.760811] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:45.336 [2024-10-21 09:53:21.761098] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:08:45.336 [2024-10-21 09:53:21.761120] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:45.336 [2024-10-21 09:53:21.761370] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:08:45.336 [2024-10-21 09:53:21.761559] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:08:45.336 [2024-10-21 09:53:21.761585] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:08:45.336 [2024-10-21 09:53:21.761739] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:45.336 09:53:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.336 09:53:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:45.336 09:53:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:45.336 09:53:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:45.336 09:53:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:45.336 09:53:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:45.336 09:53:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:45.336 09:53:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:45.336 09:53:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:45.336 09:53:21 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:45.336 09:53:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:45.336 09:53:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.336 09:53:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:45.336 09:53:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.336 09:53:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.336 09:53:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.336 09:53:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:45.336 "name": "raid_bdev1", 00:08:45.336 "uuid": "debf7a1d-f78a-4725-a7be-f08065b3ef52", 00:08:45.336 "strip_size_kb": 0, 00:08:45.336 "state": "online", 00:08:45.336 "raid_level": "raid1", 00:08:45.336 "superblock": true, 00:08:45.336 "num_base_bdevs": 2, 00:08:45.336 "num_base_bdevs_discovered": 2, 00:08:45.336 "num_base_bdevs_operational": 2, 00:08:45.336 "base_bdevs_list": [ 00:08:45.336 { 00:08:45.336 "name": "BaseBdev1", 00:08:45.336 "uuid": "2bc649b3-e3e6-5a26-8ce9-392a69ba4380", 00:08:45.336 "is_configured": true, 00:08:45.336 "data_offset": 2048, 00:08:45.336 "data_size": 63488 00:08:45.336 }, 00:08:45.336 { 00:08:45.336 "name": "BaseBdev2", 00:08:45.336 "uuid": "7acb16c3-2847-5003-992f-5aa264122620", 00:08:45.336 "is_configured": true, 00:08:45.336 "data_offset": 2048, 00:08:45.336 "data_size": 63488 00:08:45.336 } 00:08:45.336 ] 00:08:45.336 }' 00:08:45.336 09:53:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:45.336 09:53:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.596 09:53:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:45.596 09:53:22 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:45.856 [2024-10-21 09:53:22.271452] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:08:46.795 09:53:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:46.795 09:53:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.795 09:53:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.795 [2024-10-21 09:53:23.189286] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:08:46.795 [2024-10-21 09:53:23.189483] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:46.795 [2024-10-21 09:53:23.189727] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005c70 00:08:46.795 09:53:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.795 09:53:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:46.795 09:53:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:46.795 09:53:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:08:46.795 09:53:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:08:46.795 09:53:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:46.795 09:53:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:46.795 09:53:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:46.795 09:53:23 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:46.795 09:53:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:46.795 09:53:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:46.795 09:53:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:46.795 09:53:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:46.795 09:53:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:46.795 09:53:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:46.795 09:53:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.795 09:53:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:46.795 09:53:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.795 09:53:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.795 09:53:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.795 09:53:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:46.795 "name": "raid_bdev1", 00:08:46.795 "uuid": "debf7a1d-f78a-4725-a7be-f08065b3ef52", 00:08:46.795 "strip_size_kb": 0, 00:08:46.795 "state": "online", 00:08:46.795 "raid_level": "raid1", 00:08:46.795 "superblock": true, 00:08:46.795 "num_base_bdevs": 2, 00:08:46.795 "num_base_bdevs_discovered": 1, 00:08:46.795 "num_base_bdevs_operational": 1, 00:08:46.795 "base_bdevs_list": [ 00:08:46.795 { 00:08:46.795 "name": null, 00:08:46.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:46.795 "is_configured": false, 00:08:46.795 "data_offset": 0, 00:08:46.795 "data_size": 63488 00:08:46.795 }, 00:08:46.795 { 00:08:46.795 "name": 
"BaseBdev2", 00:08:46.795 "uuid": "7acb16c3-2847-5003-992f-5aa264122620", 00:08:46.795 "is_configured": true, 00:08:46.795 "data_offset": 2048, 00:08:46.795 "data_size": 63488 00:08:46.795 } 00:08:46.795 ] 00:08:46.795 }' 00:08:46.795 09:53:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:46.795 09:53:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.055 09:53:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:47.055 09:53:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.055 09:53:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.055 [2024-10-21 09:53:23.642637] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:47.055 [2024-10-21 09:53:23.642779] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:47.055 [2024-10-21 09:53:23.645238] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:47.055 [2024-10-21 09:53:23.645324] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:47.055 [2024-10-21 09:53:23.645404] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:47.055 [2024-10-21 09:53:23.645447] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:08:47.055 { 00:08:47.055 "results": [ 00:08:47.055 { 00:08:47.055 "job": "raid_bdev1", 00:08:47.055 "core_mask": "0x1", 00:08:47.055 "workload": "randrw", 00:08:47.055 "percentage": 50, 00:08:47.055 "status": "finished", 00:08:47.055 "queue_depth": 1, 00:08:47.055 "io_size": 131072, 00:08:47.055 "runtime": 1.371758, 00:08:47.055 "iops": 17866.125074539388, 00:08:47.055 "mibps": 2233.2656343174235, 00:08:47.055 "io_failed": 0, 00:08:47.055 "io_timeout": 0, 
00:08:47.055 "avg_latency_us": 53.476566104784965, 00:08:47.055 "min_latency_us": 21.016593886462882, 00:08:47.055 "max_latency_us": 1352.216593886463 00:08:47.055 } 00:08:47.055 ], 00:08:47.055 "core_count": 1 00:08:47.055 } 00:08:47.055 09:53:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.055 09:53:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63252 00:08:47.055 09:53:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 63252 ']' 00:08:47.055 09:53:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 63252 00:08:47.315 09:53:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:08:47.315 09:53:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:47.315 09:53:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63252 00:08:47.315 09:53:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:47.315 09:53:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:47.315 09:53:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63252' 00:08:47.315 killing process with pid 63252 00:08:47.315 09:53:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 63252 00:08:47.315 [2024-10-21 09:53:23.691585] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:47.315 09:53:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 63252 00:08:47.315 [2024-10-21 09:53:23.833496] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:48.696 09:53:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.qE1sYo4xjH 00:08:48.696 09:53:25 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:48.696 09:53:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:48.696 09:53:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:48.696 09:53:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:48.696 09:53:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:48.696 09:53:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:48.696 09:53:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:48.696 00:08:48.696 real 0m4.452s 00:08:48.696 user 0m5.195s 00:08:48.696 sys 0m0.626s 00:08:48.696 ************************************ 00:08:48.696 END TEST raid_write_error_test 00:08:48.696 ************************************ 00:08:48.696 09:53:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:48.696 09:53:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.696 09:53:25 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:08:48.696 09:53:25 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:48.696 09:53:25 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:08:48.696 09:53:25 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:48.696 09:53:25 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:48.696 09:53:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:48.696 ************************************ 00:08:48.696 START TEST raid_state_function_test 00:08:48.696 ************************************ 00:08:48.696 09:53:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 3 false 00:08:48.696 09:53:25 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:48.696 09:53:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:48.697 09:53:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:48.697 09:53:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:48.697 09:53:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:48.697 09:53:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:48.697 09:53:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:48.697 09:53:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:48.697 09:53:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:48.697 09:53:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:48.697 09:53:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:48.697 09:53:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:48.697 09:53:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:48.697 09:53:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:48.697 09:53:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:48.697 09:53:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:48.697 09:53:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:48.697 09:53:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:48.697 09:53:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:48.697 
09:53:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:48.697 09:53:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:48.697 09:53:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:48.697 09:53:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:48.697 09:53:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:48.697 09:53:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:48.697 09:53:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:48.697 09:53:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=63395 00:08:48.697 09:53:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:48.697 09:53:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63395' 00:08:48.697 Process raid pid: 63395 00:08:48.697 09:53:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 63395 00:08:48.697 09:53:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 63395 ']' 00:08:48.697 09:53:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:48.697 09:53:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:48.697 09:53:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:48.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:48.697 09:53:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:48.697 09:53:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.697 [2024-10-21 09:53:25.287930] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:08:48.697 [2024-10-21 09:53:25.288108] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:48.956 [2024-10-21 09:53:25.451180] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.216 [2024-10-21 09:53:25.592667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.476 [2024-10-21 09:53:25.844314] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:49.476 [2024-10-21 09:53:25.844453] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:49.741 09:53:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:49.741 09:53:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:08:49.741 09:53:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:49.741 09:53:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.741 09:53:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.741 [2024-10-21 09:53:26.164872] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:49.741 [2024-10-21 09:53:26.165053] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:49.741 [2024-10-21 09:53:26.165082] 
bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:49.741 [2024-10-21 09:53:26.165104] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:49.741 [2024-10-21 09:53:26.165121] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:49.741 [2024-10-21 09:53:26.165141] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:49.741 09:53:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.741 09:53:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:49.741 09:53:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:49.741 09:53:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:49.741 09:53:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:49.741 09:53:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:49.741 09:53:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:49.741 09:53:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:49.741 09:53:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:49.741 09:53:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:49.741 09:53:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:49.741 09:53:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.741 09:53:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:08:49.741 09:53:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.741 09:53:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.741 09:53:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.741 09:53:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:49.741 "name": "Existed_Raid", 00:08:49.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.741 "strip_size_kb": 64, 00:08:49.741 "state": "configuring", 00:08:49.741 "raid_level": "raid0", 00:08:49.741 "superblock": false, 00:08:49.741 "num_base_bdevs": 3, 00:08:49.741 "num_base_bdevs_discovered": 0, 00:08:49.741 "num_base_bdevs_operational": 3, 00:08:49.741 "base_bdevs_list": [ 00:08:49.741 { 00:08:49.741 "name": "BaseBdev1", 00:08:49.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.741 "is_configured": false, 00:08:49.741 "data_offset": 0, 00:08:49.741 "data_size": 0 00:08:49.741 }, 00:08:49.741 { 00:08:49.741 "name": "BaseBdev2", 00:08:49.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.741 "is_configured": false, 00:08:49.741 "data_offset": 0, 00:08:49.741 "data_size": 0 00:08:49.741 }, 00:08:49.741 { 00:08:49.741 "name": "BaseBdev3", 00:08:49.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.741 "is_configured": false, 00:08:49.741 "data_offset": 0, 00:08:49.741 "data_size": 0 00:08:49.741 } 00:08:49.741 ] 00:08:49.741 }' 00:08:49.741 09:53:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:49.741 09:53:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.310 09:53:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:50.310 09:53:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.310 09:53:26 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.310 [2024-10-21 09:53:26.624034] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:50.310 [2024-10-21 09:53:26.624170] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000005b80 name Existed_Raid, state configuring 00:08:50.310 09:53:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.310 09:53:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:50.310 09:53:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.310 09:53:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.310 [2024-10-21 09:53:26.636045] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:50.310 [2024-10-21 09:53:26.636137] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:50.310 [2024-10-21 09:53:26.636165] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:50.310 [2024-10-21 09:53:26.636188] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:50.310 [2024-10-21 09:53:26.636205] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:50.310 [2024-10-21 09:53:26.636226] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:50.310 09:53:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.310 09:53:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:50.310 09:53:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:08:50.310 09:53:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.310 [2024-10-21 09:53:26.692077] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:50.310 BaseBdev1 00:08:50.310 09:53:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.310 09:53:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:50.310 09:53:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:50.310 09:53:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:50.310 09:53:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:50.310 09:53:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:50.310 09:53:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:50.310 09:53:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:50.310 09:53:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.310 09:53:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.310 09:53:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.310 09:53:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:50.310 09:53:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.310 09:53:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.310 [ 00:08:50.310 { 00:08:50.310 "name": "BaseBdev1", 00:08:50.310 "aliases": [ 00:08:50.310 "14ffef9c-b5fd-48e5-84f5-57837f1160b0" 00:08:50.310 ], 00:08:50.310 
"product_name": "Malloc disk", 00:08:50.310 "block_size": 512, 00:08:50.310 "num_blocks": 65536, 00:08:50.310 "uuid": "14ffef9c-b5fd-48e5-84f5-57837f1160b0", 00:08:50.310 "assigned_rate_limits": { 00:08:50.310 "rw_ios_per_sec": 0, 00:08:50.310 "rw_mbytes_per_sec": 0, 00:08:50.310 "r_mbytes_per_sec": 0, 00:08:50.310 "w_mbytes_per_sec": 0 00:08:50.310 }, 00:08:50.310 "claimed": true, 00:08:50.310 "claim_type": "exclusive_write", 00:08:50.310 "zoned": false, 00:08:50.310 "supported_io_types": { 00:08:50.310 "read": true, 00:08:50.310 "write": true, 00:08:50.310 "unmap": true, 00:08:50.310 "flush": true, 00:08:50.310 "reset": true, 00:08:50.310 "nvme_admin": false, 00:08:50.310 "nvme_io": false, 00:08:50.310 "nvme_io_md": false, 00:08:50.310 "write_zeroes": true, 00:08:50.310 "zcopy": true, 00:08:50.310 "get_zone_info": false, 00:08:50.310 "zone_management": false, 00:08:50.310 "zone_append": false, 00:08:50.310 "compare": false, 00:08:50.310 "compare_and_write": false, 00:08:50.310 "abort": true, 00:08:50.310 "seek_hole": false, 00:08:50.310 "seek_data": false, 00:08:50.310 "copy": true, 00:08:50.310 "nvme_iov_md": false 00:08:50.310 }, 00:08:50.310 "memory_domains": [ 00:08:50.310 { 00:08:50.311 "dma_device_id": "system", 00:08:50.311 "dma_device_type": 1 00:08:50.311 }, 00:08:50.311 { 00:08:50.311 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:50.311 "dma_device_type": 2 00:08:50.311 } 00:08:50.311 ], 00:08:50.311 "driver_specific": {} 00:08:50.311 } 00:08:50.311 ] 00:08:50.311 09:53:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.311 09:53:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:50.311 09:53:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:50.311 09:53:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:50.311 09:53:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:50.311 09:53:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:50.311 09:53:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:50.311 09:53:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:50.311 09:53:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:50.311 09:53:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:50.311 09:53:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:50.311 09:53:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:50.311 09:53:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.311 09:53:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.311 09:53:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.311 09:53:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:50.311 09:53:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.311 09:53:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:50.311 "name": "Existed_Raid", 00:08:50.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:50.311 "strip_size_kb": 64, 00:08:50.311 "state": "configuring", 00:08:50.311 "raid_level": "raid0", 00:08:50.311 "superblock": false, 00:08:50.311 "num_base_bdevs": 3, 00:08:50.311 "num_base_bdevs_discovered": 1, 00:08:50.311 "num_base_bdevs_operational": 3, 00:08:50.311 "base_bdevs_list": [ 00:08:50.311 { 00:08:50.311 "name": "BaseBdev1", 
00:08:50.311 "uuid": "14ffef9c-b5fd-48e5-84f5-57837f1160b0", 00:08:50.311 "is_configured": true, 00:08:50.311 "data_offset": 0, 00:08:50.311 "data_size": 65536 00:08:50.311 }, 00:08:50.311 { 00:08:50.311 "name": "BaseBdev2", 00:08:50.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:50.311 "is_configured": false, 00:08:50.311 "data_offset": 0, 00:08:50.311 "data_size": 0 00:08:50.311 }, 00:08:50.311 { 00:08:50.311 "name": "BaseBdev3", 00:08:50.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:50.311 "is_configured": false, 00:08:50.311 "data_offset": 0, 00:08:50.311 "data_size": 0 00:08:50.311 } 00:08:50.311 ] 00:08:50.311 }' 00:08:50.311 09:53:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:50.311 09:53:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.570 09:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:50.830 09:53:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.830 09:53:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.830 [2024-10-21 09:53:27.171358] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:50.830 [2024-10-21 09:53:27.171499] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000005f00 name Existed_Raid, state configuring 00:08:50.830 09:53:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.830 09:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:50.830 09:53:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.830 09:53:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.830 [2024-10-21 
09:53:27.179376] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:50.830 [2024-10-21 09:53:27.181655] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:50.830 [2024-10-21 09:53:27.181735] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:50.830 [2024-10-21 09:53:27.181770] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:50.830 [2024-10-21 09:53:27.181795] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:50.830 09:53:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.830 09:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:50.830 09:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:50.830 09:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:50.830 09:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:50.830 09:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:50.830 09:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:50.830 09:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:50.830 09:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:50.830 09:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:50.830 09:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:50.830 09:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:50.830 09:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:50.830 09:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:50.830 09:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.830 09:53:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.830 09:53:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.830 09:53:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.830 09:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:50.830 "name": "Existed_Raid", 00:08:50.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:50.830 "strip_size_kb": 64, 00:08:50.830 "state": "configuring", 00:08:50.830 "raid_level": "raid0", 00:08:50.830 "superblock": false, 00:08:50.831 "num_base_bdevs": 3, 00:08:50.831 "num_base_bdevs_discovered": 1, 00:08:50.831 "num_base_bdevs_operational": 3, 00:08:50.831 "base_bdevs_list": [ 00:08:50.831 { 00:08:50.831 "name": "BaseBdev1", 00:08:50.831 "uuid": "14ffef9c-b5fd-48e5-84f5-57837f1160b0", 00:08:50.831 "is_configured": true, 00:08:50.831 "data_offset": 0, 00:08:50.831 "data_size": 65536 00:08:50.831 }, 00:08:50.831 { 00:08:50.831 "name": "BaseBdev2", 00:08:50.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:50.831 "is_configured": false, 00:08:50.831 "data_offset": 0, 00:08:50.831 "data_size": 0 00:08:50.831 }, 00:08:50.831 { 00:08:50.831 "name": "BaseBdev3", 00:08:50.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:50.831 "is_configured": false, 00:08:50.831 "data_offset": 0, 00:08:50.831 "data_size": 0 00:08:50.831 } 00:08:50.831 ] 00:08:50.831 }' 00:08:50.831 09:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:50.831 09:53:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.090 09:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:51.090 09:53:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.090 09:53:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.090 [2024-10-21 09:53:27.677996] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:51.090 BaseBdev2 00:08:51.090 09:53:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.090 09:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:51.090 09:53:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:51.090 09:53:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:51.090 09:53:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:51.090 09:53:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:51.090 09:53:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:51.090 09:53:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:51.090 09:53:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.090 09:53:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.349 09:53:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.349 09:53:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:51.349 09:53:27 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.349 09:53:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.349 [ 00:08:51.349 { 00:08:51.349 "name": "BaseBdev2", 00:08:51.349 "aliases": [ 00:08:51.349 "3e852184-3004-44cc-b296-131c748ec7f5" 00:08:51.349 ], 00:08:51.349 "product_name": "Malloc disk", 00:08:51.349 "block_size": 512, 00:08:51.349 "num_blocks": 65536, 00:08:51.349 "uuid": "3e852184-3004-44cc-b296-131c748ec7f5", 00:08:51.349 "assigned_rate_limits": { 00:08:51.349 "rw_ios_per_sec": 0, 00:08:51.349 "rw_mbytes_per_sec": 0, 00:08:51.349 "r_mbytes_per_sec": 0, 00:08:51.349 "w_mbytes_per_sec": 0 00:08:51.349 }, 00:08:51.349 "claimed": true, 00:08:51.349 "claim_type": "exclusive_write", 00:08:51.349 "zoned": false, 00:08:51.349 "supported_io_types": { 00:08:51.349 "read": true, 00:08:51.349 "write": true, 00:08:51.349 "unmap": true, 00:08:51.349 "flush": true, 00:08:51.349 "reset": true, 00:08:51.349 "nvme_admin": false, 00:08:51.349 "nvme_io": false, 00:08:51.349 "nvme_io_md": false, 00:08:51.349 "write_zeroes": true, 00:08:51.349 "zcopy": true, 00:08:51.349 "get_zone_info": false, 00:08:51.349 "zone_management": false, 00:08:51.349 "zone_append": false, 00:08:51.349 "compare": false, 00:08:51.349 "compare_and_write": false, 00:08:51.349 "abort": true, 00:08:51.349 "seek_hole": false, 00:08:51.349 "seek_data": false, 00:08:51.349 "copy": true, 00:08:51.349 "nvme_iov_md": false 00:08:51.349 }, 00:08:51.349 "memory_domains": [ 00:08:51.349 { 00:08:51.349 "dma_device_id": "system", 00:08:51.349 "dma_device_type": 1 00:08:51.349 }, 00:08:51.349 { 00:08:51.349 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:51.349 "dma_device_type": 2 00:08:51.349 } 00:08:51.349 ], 00:08:51.349 "driver_specific": {} 00:08:51.349 } 00:08:51.349 ] 00:08:51.349 09:53:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.349 09:53:27 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:51.349 09:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:51.349 09:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:51.349 09:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:51.349 09:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:51.349 09:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:51.349 09:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:51.349 09:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:51.349 09:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:51.349 09:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:51.349 09:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:51.349 09:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:51.349 09:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:51.349 09:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.349 09:53:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.349 09:53:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.349 09:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:51.349 09:53:27 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.349 09:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:51.349 "name": "Existed_Raid", 00:08:51.349 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:51.349 "strip_size_kb": 64, 00:08:51.349 "state": "configuring", 00:08:51.349 "raid_level": "raid0", 00:08:51.349 "superblock": false, 00:08:51.349 "num_base_bdevs": 3, 00:08:51.349 "num_base_bdevs_discovered": 2, 00:08:51.349 "num_base_bdevs_operational": 3, 00:08:51.349 "base_bdevs_list": [ 00:08:51.349 { 00:08:51.349 "name": "BaseBdev1", 00:08:51.349 "uuid": "14ffef9c-b5fd-48e5-84f5-57837f1160b0", 00:08:51.349 "is_configured": true, 00:08:51.349 "data_offset": 0, 00:08:51.349 "data_size": 65536 00:08:51.349 }, 00:08:51.349 { 00:08:51.349 "name": "BaseBdev2", 00:08:51.349 "uuid": "3e852184-3004-44cc-b296-131c748ec7f5", 00:08:51.349 "is_configured": true, 00:08:51.349 "data_offset": 0, 00:08:51.349 "data_size": 65536 00:08:51.349 }, 00:08:51.349 { 00:08:51.349 "name": "BaseBdev3", 00:08:51.349 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:51.349 "is_configured": false, 00:08:51.349 "data_offset": 0, 00:08:51.349 "data_size": 0 00:08:51.349 } 00:08:51.349 ] 00:08:51.349 }' 00:08:51.349 09:53:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:51.349 09:53:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.608 09:53:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:51.608 09:53:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.608 09:53:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.869 [2024-10-21 09:53:28.232382] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:51.869 [2024-10-21 09:53:28.232518] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:08:51.869 [2024-10-21 09:53:28.232543] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:51.869 [2024-10-21 09:53:28.232859] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:08:51.869 [2024-10-21 09:53:28.233060] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:08:51.869 [2024-10-21 09:53:28.233071] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006280 00:08:51.869 [2024-10-21 09:53:28.233355] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:51.869 BaseBdev3 00:08:51.869 09:53:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.869 09:53:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:51.869 09:53:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:51.869 09:53:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:51.869 09:53:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:51.869 09:53:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:51.869 09:53:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:51.869 09:53:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:51.869 09:53:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.869 09:53:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.869 09:53:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.869 
09:53:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:51.869 09:53:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.869 09:53:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.869 [ 00:08:51.869 { 00:08:51.869 "name": "BaseBdev3", 00:08:51.869 "aliases": [ 00:08:51.869 "bb15aac5-65f9-46a7-baec-6d559b7ebbab" 00:08:51.869 ], 00:08:51.869 "product_name": "Malloc disk", 00:08:51.869 "block_size": 512, 00:08:51.869 "num_blocks": 65536, 00:08:51.869 "uuid": "bb15aac5-65f9-46a7-baec-6d559b7ebbab", 00:08:51.869 "assigned_rate_limits": { 00:08:51.869 "rw_ios_per_sec": 0, 00:08:51.869 "rw_mbytes_per_sec": 0, 00:08:51.869 "r_mbytes_per_sec": 0, 00:08:51.869 "w_mbytes_per_sec": 0 00:08:51.869 }, 00:08:51.869 "claimed": true, 00:08:51.869 "claim_type": "exclusive_write", 00:08:51.869 "zoned": false, 00:08:51.869 "supported_io_types": { 00:08:51.869 "read": true, 00:08:51.869 "write": true, 00:08:51.869 "unmap": true, 00:08:51.869 "flush": true, 00:08:51.869 "reset": true, 00:08:51.869 "nvme_admin": false, 00:08:51.869 "nvme_io": false, 00:08:51.869 "nvme_io_md": false, 00:08:51.869 "write_zeroes": true, 00:08:51.869 "zcopy": true, 00:08:51.869 "get_zone_info": false, 00:08:51.869 "zone_management": false, 00:08:51.869 "zone_append": false, 00:08:51.869 "compare": false, 00:08:51.869 "compare_and_write": false, 00:08:51.869 "abort": true, 00:08:51.869 "seek_hole": false, 00:08:51.869 "seek_data": false, 00:08:51.869 "copy": true, 00:08:51.869 "nvme_iov_md": false 00:08:51.869 }, 00:08:51.869 "memory_domains": [ 00:08:51.869 { 00:08:51.869 "dma_device_id": "system", 00:08:51.869 "dma_device_type": 1 00:08:51.869 }, 00:08:51.869 { 00:08:51.869 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:51.869 "dma_device_type": 2 00:08:51.869 } 00:08:51.869 ], 00:08:51.869 "driver_specific": {} 00:08:51.869 } 00:08:51.869 ] 
00:08:51.869 09:53:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.869 09:53:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:51.869 09:53:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:51.869 09:53:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:51.869 09:53:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:51.869 09:53:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:51.869 09:53:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:51.869 09:53:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:51.869 09:53:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:51.869 09:53:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:51.869 09:53:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:51.869 09:53:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:51.869 09:53:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:51.869 09:53:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:51.869 09:53:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.869 09:53:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:51.869 09:53:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.869 09:53:28 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:51.869 09:53:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.869 09:53:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:51.869 "name": "Existed_Raid", 00:08:51.869 "uuid": "06e3481f-583d-4bf7-9ab3-19e4c5064003", 00:08:51.869 "strip_size_kb": 64, 00:08:51.869 "state": "online", 00:08:51.869 "raid_level": "raid0", 00:08:51.869 "superblock": false, 00:08:51.869 "num_base_bdevs": 3, 00:08:51.869 "num_base_bdevs_discovered": 3, 00:08:51.869 "num_base_bdevs_operational": 3, 00:08:51.869 "base_bdevs_list": [ 00:08:51.869 { 00:08:51.869 "name": "BaseBdev1", 00:08:51.869 "uuid": "14ffef9c-b5fd-48e5-84f5-57837f1160b0", 00:08:51.869 "is_configured": true, 00:08:51.869 "data_offset": 0, 00:08:51.869 "data_size": 65536 00:08:51.869 }, 00:08:51.869 { 00:08:51.869 "name": "BaseBdev2", 00:08:51.869 "uuid": "3e852184-3004-44cc-b296-131c748ec7f5", 00:08:51.869 "is_configured": true, 00:08:51.869 "data_offset": 0, 00:08:51.869 "data_size": 65536 00:08:51.869 }, 00:08:51.869 { 00:08:51.869 "name": "BaseBdev3", 00:08:51.869 "uuid": "bb15aac5-65f9-46a7-baec-6d559b7ebbab", 00:08:51.869 "is_configured": true, 00:08:51.869 "data_offset": 0, 00:08:51.869 "data_size": 65536 00:08:51.869 } 00:08:51.869 ] 00:08:51.869 }' 00:08:51.869 09:53:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:51.869 09:53:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.129 09:53:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:52.129 09:53:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:52.129 09:53:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:52.129 09:53:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:08:52.129 09:53:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:52.129 09:53:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:52.129 09:53:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:52.129 09:53:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:52.129 09:53:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.129 09:53:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.129 [2024-10-21 09:53:28.704040] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:52.388 09:53:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.388 09:53:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:52.388 "name": "Existed_Raid", 00:08:52.388 "aliases": [ 00:08:52.388 "06e3481f-583d-4bf7-9ab3-19e4c5064003" 00:08:52.388 ], 00:08:52.388 "product_name": "Raid Volume", 00:08:52.388 "block_size": 512, 00:08:52.388 "num_blocks": 196608, 00:08:52.388 "uuid": "06e3481f-583d-4bf7-9ab3-19e4c5064003", 00:08:52.388 "assigned_rate_limits": { 00:08:52.388 "rw_ios_per_sec": 0, 00:08:52.388 "rw_mbytes_per_sec": 0, 00:08:52.388 "r_mbytes_per_sec": 0, 00:08:52.388 "w_mbytes_per_sec": 0 00:08:52.388 }, 00:08:52.388 "claimed": false, 00:08:52.388 "zoned": false, 00:08:52.388 "supported_io_types": { 00:08:52.388 "read": true, 00:08:52.388 "write": true, 00:08:52.388 "unmap": true, 00:08:52.388 "flush": true, 00:08:52.388 "reset": true, 00:08:52.388 "nvme_admin": false, 00:08:52.388 "nvme_io": false, 00:08:52.388 "nvme_io_md": false, 00:08:52.388 "write_zeroes": true, 00:08:52.388 "zcopy": false, 00:08:52.388 "get_zone_info": false, 00:08:52.388 "zone_management": false, 00:08:52.388 
"zone_append": false, 00:08:52.388 "compare": false, 00:08:52.388 "compare_and_write": false, 00:08:52.388 "abort": false, 00:08:52.388 "seek_hole": false, 00:08:52.388 "seek_data": false, 00:08:52.388 "copy": false, 00:08:52.388 "nvme_iov_md": false 00:08:52.388 }, 00:08:52.388 "memory_domains": [ 00:08:52.388 { 00:08:52.388 "dma_device_id": "system", 00:08:52.388 "dma_device_type": 1 00:08:52.388 }, 00:08:52.388 { 00:08:52.388 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:52.388 "dma_device_type": 2 00:08:52.388 }, 00:08:52.388 { 00:08:52.388 "dma_device_id": "system", 00:08:52.388 "dma_device_type": 1 00:08:52.388 }, 00:08:52.388 { 00:08:52.388 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:52.388 "dma_device_type": 2 00:08:52.388 }, 00:08:52.388 { 00:08:52.388 "dma_device_id": "system", 00:08:52.388 "dma_device_type": 1 00:08:52.388 }, 00:08:52.388 { 00:08:52.388 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:52.388 "dma_device_type": 2 00:08:52.388 } 00:08:52.388 ], 00:08:52.388 "driver_specific": { 00:08:52.388 "raid": { 00:08:52.388 "uuid": "06e3481f-583d-4bf7-9ab3-19e4c5064003", 00:08:52.388 "strip_size_kb": 64, 00:08:52.388 "state": "online", 00:08:52.388 "raid_level": "raid0", 00:08:52.388 "superblock": false, 00:08:52.388 "num_base_bdevs": 3, 00:08:52.388 "num_base_bdevs_discovered": 3, 00:08:52.388 "num_base_bdevs_operational": 3, 00:08:52.388 "base_bdevs_list": [ 00:08:52.388 { 00:08:52.388 "name": "BaseBdev1", 00:08:52.388 "uuid": "14ffef9c-b5fd-48e5-84f5-57837f1160b0", 00:08:52.388 "is_configured": true, 00:08:52.388 "data_offset": 0, 00:08:52.388 "data_size": 65536 00:08:52.388 }, 00:08:52.388 { 00:08:52.388 "name": "BaseBdev2", 00:08:52.388 "uuid": "3e852184-3004-44cc-b296-131c748ec7f5", 00:08:52.388 "is_configured": true, 00:08:52.388 "data_offset": 0, 00:08:52.388 "data_size": 65536 00:08:52.388 }, 00:08:52.388 { 00:08:52.388 "name": "BaseBdev3", 00:08:52.388 "uuid": "bb15aac5-65f9-46a7-baec-6d559b7ebbab", 00:08:52.388 "is_configured": true, 
00:08:52.388 "data_offset": 0, 00:08:52.388 "data_size": 65536 00:08:52.388 } 00:08:52.388 ] 00:08:52.388 } 00:08:52.388 } 00:08:52.388 }' 00:08:52.388 09:53:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:52.388 09:53:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:52.388 BaseBdev2 00:08:52.388 BaseBdev3' 00:08:52.388 09:53:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:52.388 09:53:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:52.388 09:53:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:52.388 09:53:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:52.388 09:53:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.388 09:53:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.388 09:53:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:52.388 09:53:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.388 09:53:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:52.388 09:53:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:52.388 09:53:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:52.388 09:53:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:52.388 09:53:28 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.388 09:53:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.389 09:53:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:52.389 09:53:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.389 09:53:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:52.389 09:53:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:52.389 09:53:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:52.389 09:53:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:52.389 09:53:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.389 09:53:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.389 09:53:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:52.389 09:53:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.389 09:53:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:52.389 09:53:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:52.389 09:53:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:52.389 09:53:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.389 09:53:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.389 [2024-10-21 09:53:28.975314] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:52.389 [2024-10-21 09:53:28.975361] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:52.389 [2024-10-21 09:53:28.975421] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:52.648 09:53:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.648 09:53:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:52.648 09:53:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:52.648 09:53:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:52.648 09:53:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:52.648 09:53:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:52.648 09:53:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:08:52.648 09:53:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:52.648 09:53:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:52.648 09:53:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:52.648 09:53:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:52.648 09:53:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:52.648 09:53:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:52.648 09:53:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:52.648 09:53:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:08:52.648 09:53:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:52.648 09:53:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.648 09:53:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:52.648 09:53:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.648 09:53:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.648 09:53:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.648 09:53:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:52.648 "name": "Existed_Raid", 00:08:52.648 "uuid": "06e3481f-583d-4bf7-9ab3-19e4c5064003", 00:08:52.648 "strip_size_kb": 64, 00:08:52.648 "state": "offline", 00:08:52.648 "raid_level": "raid0", 00:08:52.648 "superblock": false, 00:08:52.648 "num_base_bdevs": 3, 00:08:52.648 "num_base_bdevs_discovered": 2, 00:08:52.648 "num_base_bdevs_operational": 2, 00:08:52.648 "base_bdevs_list": [ 00:08:52.648 { 00:08:52.648 "name": null, 00:08:52.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:52.649 "is_configured": false, 00:08:52.649 "data_offset": 0, 00:08:52.649 "data_size": 65536 00:08:52.649 }, 00:08:52.649 { 00:08:52.649 "name": "BaseBdev2", 00:08:52.649 "uuid": "3e852184-3004-44cc-b296-131c748ec7f5", 00:08:52.649 "is_configured": true, 00:08:52.649 "data_offset": 0, 00:08:52.649 "data_size": 65536 00:08:52.649 }, 00:08:52.649 { 00:08:52.649 "name": "BaseBdev3", 00:08:52.649 "uuid": "bb15aac5-65f9-46a7-baec-6d559b7ebbab", 00:08:52.649 "is_configured": true, 00:08:52.649 "data_offset": 0, 00:08:52.649 "data_size": 65536 00:08:52.649 } 00:08:52.649 ] 00:08:52.649 }' 00:08:52.649 09:53:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:52.649 09:53:29 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.218 09:53:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:53.218 09:53:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:53.218 09:53:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.218 09:53:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:53.218 09:53:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.218 09:53:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.218 09:53:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.218 09:53:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:53.218 09:53:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:53.218 09:53:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:53.218 09:53:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.218 09:53:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.218 [2024-10-21 09:53:29.568947] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:53.218 09:53:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.218 09:53:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:53.218 09:53:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:53.218 09:53:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.218 09:53:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:53.218 09:53:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.218 09:53:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.218 09:53:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.218 09:53:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:53.218 09:53:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:53.218 09:53:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:53.218 09:53:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.218 09:53:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.218 [2024-10-21 09:53:29.729611] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:53.218 [2024-10-21 09:53:29.729770] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state offline 00:08:53.479 09:53:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.479 09:53:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:53.479 09:53:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:53.479 09:53:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.479 09:53:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:53.479 09:53:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.479 09:53:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:08:53.479 09:53:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.479 09:53:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:53.479 09:53:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:53.479 09:53:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:53.479 09:53:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:53.479 09:53:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:53.479 09:53:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:53.479 09:53:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.479 09:53:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.479 BaseBdev2 00:08:53.479 09:53:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.479 09:53:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:53.479 09:53:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:53.479 09:53:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:53.479 09:53:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:53.479 09:53:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:53.479 09:53:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:53.479 09:53:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:53.479 09:53:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:53.479 09:53:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.479 09:53:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.479 09:53:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:53.479 09:53:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.479 09:53:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.479 [ 00:08:53.479 { 00:08:53.479 "name": "BaseBdev2", 00:08:53.479 "aliases": [ 00:08:53.479 "4d2ec692-4309-4518-9a7b-241f6d9f24a2" 00:08:53.479 ], 00:08:53.479 "product_name": "Malloc disk", 00:08:53.479 "block_size": 512, 00:08:53.479 "num_blocks": 65536, 00:08:53.479 "uuid": "4d2ec692-4309-4518-9a7b-241f6d9f24a2", 00:08:53.479 "assigned_rate_limits": { 00:08:53.479 "rw_ios_per_sec": 0, 00:08:53.479 "rw_mbytes_per_sec": 0, 00:08:53.479 "r_mbytes_per_sec": 0, 00:08:53.479 "w_mbytes_per_sec": 0 00:08:53.479 }, 00:08:53.479 "claimed": false, 00:08:53.479 "zoned": false, 00:08:53.479 "supported_io_types": { 00:08:53.479 "read": true, 00:08:53.479 "write": true, 00:08:53.479 "unmap": true, 00:08:53.479 "flush": true, 00:08:53.479 "reset": true, 00:08:53.479 "nvme_admin": false, 00:08:53.479 "nvme_io": false, 00:08:53.479 "nvme_io_md": false, 00:08:53.479 "write_zeroes": true, 00:08:53.479 "zcopy": true, 00:08:53.479 "get_zone_info": false, 00:08:53.479 "zone_management": false, 00:08:53.479 "zone_append": false, 00:08:53.479 "compare": false, 00:08:53.479 "compare_and_write": false, 00:08:53.479 "abort": true, 00:08:53.479 "seek_hole": false, 00:08:53.479 "seek_data": false, 00:08:53.479 "copy": true, 00:08:53.479 "nvme_iov_md": false 00:08:53.479 }, 00:08:53.479 "memory_domains": [ 00:08:53.479 { 00:08:53.479 "dma_device_id": "system", 00:08:53.479 "dma_device_type": 1 00:08:53.479 }, 
00:08:53.479 { 00:08:53.479 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:53.479 "dma_device_type": 2 00:08:53.479 } 00:08:53.479 ], 00:08:53.479 "driver_specific": {} 00:08:53.479 } 00:08:53.479 ] 00:08:53.479 09:53:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.479 09:53:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:53.479 09:53:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:53.479 09:53:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:53.479 09:53:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:53.479 09:53:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.479 09:53:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.479 BaseBdev3 00:08:53.479 09:53:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.479 09:53:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:53.479 09:53:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:53.479 09:53:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:53.479 09:53:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:53.479 09:53:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:53.479 09:53:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:53.479 09:53:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:53.479 09:53:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:08:53.479 09:53:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.479 09:53:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.479 09:53:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:53.479 09:53:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.479 09:53:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.479 [ 00:08:53.479 { 00:08:53.479 "name": "BaseBdev3", 00:08:53.479 "aliases": [ 00:08:53.479 "ed007d28-9168-4213-8a81-e49b8c57f0b1" 00:08:53.479 ], 00:08:53.479 "product_name": "Malloc disk", 00:08:53.479 "block_size": 512, 00:08:53.479 "num_blocks": 65536, 00:08:53.479 "uuid": "ed007d28-9168-4213-8a81-e49b8c57f0b1", 00:08:53.479 "assigned_rate_limits": { 00:08:53.479 "rw_ios_per_sec": 0, 00:08:53.479 "rw_mbytes_per_sec": 0, 00:08:53.479 "r_mbytes_per_sec": 0, 00:08:53.479 "w_mbytes_per_sec": 0 00:08:53.479 }, 00:08:53.479 "claimed": false, 00:08:53.479 "zoned": false, 00:08:53.479 "supported_io_types": { 00:08:53.479 "read": true, 00:08:53.479 "write": true, 00:08:53.479 "unmap": true, 00:08:53.479 "flush": true, 00:08:53.479 "reset": true, 00:08:53.479 "nvme_admin": false, 00:08:53.479 "nvme_io": false, 00:08:53.479 "nvme_io_md": false, 00:08:53.479 "write_zeroes": true, 00:08:53.479 "zcopy": true, 00:08:53.479 "get_zone_info": false, 00:08:53.479 "zone_management": false, 00:08:53.479 "zone_append": false, 00:08:53.479 "compare": false, 00:08:53.479 "compare_and_write": false, 00:08:53.479 "abort": true, 00:08:53.479 "seek_hole": false, 00:08:53.479 "seek_data": false, 00:08:53.479 "copy": true, 00:08:53.479 "nvme_iov_md": false 00:08:53.479 }, 00:08:53.479 "memory_domains": [ 00:08:53.479 { 00:08:53.479 "dma_device_id": "system", 00:08:53.479 "dma_device_type": 1 00:08:53.479 }, 00:08:53.479 { 
00:08:53.479 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:53.479 "dma_device_type": 2 00:08:53.479 } 00:08:53.479 ], 00:08:53.479 "driver_specific": {} 00:08:53.479 } 00:08:53.479 ] 00:08:53.479 09:53:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.479 09:53:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:53.479 09:53:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:53.479 09:53:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:53.479 09:53:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:53.479 09:53:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.479 09:53:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.479 [2024-10-21 09:53:30.058166] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:53.479 [2024-10-21 09:53:30.058293] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:53.479 [2024-10-21 09:53:30.058336] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:53.479 [2024-10-21 09:53:30.060399] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:53.480 09:53:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.480 09:53:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:53.480 09:53:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:53.480 09:53:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:08:53.480 09:53:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:53.480 09:53:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:53.480 09:53:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:53.480 09:53:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:53.480 09:53:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:53.480 09:53:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:53.480 09:53:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:53.480 09:53:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.480 09:53:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:53.480 09:53:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.480 09:53:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.739 09:53:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.739 09:53:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:53.739 "name": "Existed_Raid", 00:08:53.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.739 "strip_size_kb": 64, 00:08:53.739 "state": "configuring", 00:08:53.739 "raid_level": "raid0", 00:08:53.739 "superblock": false, 00:08:53.739 "num_base_bdevs": 3, 00:08:53.739 "num_base_bdevs_discovered": 2, 00:08:53.739 "num_base_bdevs_operational": 3, 00:08:53.739 "base_bdevs_list": [ 00:08:53.739 { 00:08:53.739 "name": "BaseBdev1", 00:08:53.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.739 
"is_configured": false, 00:08:53.739 "data_offset": 0, 00:08:53.739 "data_size": 0 00:08:53.739 }, 00:08:53.739 { 00:08:53.739 "name": "BaseBdev2", 00:08:53.739 "uuid": "4d2ec692-4309-4518-9a7b-241f6d9f24a2", 00:08:53.739 "is_configured": true, 00:08:53.739 "data_offset": 0, 00:08:53.739 "data_size": 65536 00:08:53.739 }, 00:08:53.739 { 00:08:53.739 "name": "BaseBdev3", 00:08:53.739 "uuid": "ed007d28-9168-4213-8a81-e49b8c57f0b1", 00:08:53.739 "is_configured": true, 00:08:53.739 "data_offset": 0, 00:08:53.739 "data_size": 65536 00:08:53.739 } 00:08:53.739 ] 00:08:53.739 }' 00:08:53.739 09:53:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:53.739 09:53:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.998 09:53:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:53.999 09:53:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.999 09:53:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.999 [2024-10-21 09:53:30.465491] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:53.999 09:53:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.999 09:53:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:53.999 09:53:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:53.999 09:53:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:53.999 09:53:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:53.999 09:53:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:53.999 09:53:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:53.999 09:53:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:53.999 09:53:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:53.999 09:53:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:53.999 09:53:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:53.999 09:53:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.999 09:53:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:53.999 09:53:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.999 09:53:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.999 09:53:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.999 09:53:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:53.999 "name": "Existed_Raid", 00:08:53.999 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.999 "strip_size_kb": 64, 00:08:53.999 "state": "configuring", 00:08:53.999 "raid_level": "raid0", 00:08:53.999 "superblock": false, 00:08:53.999 "num_base_bdevs": 3, 00:08:53.999 "num_base_bdevs_discovered": 1, 00:08:53.999 "num_base_bdevs_operational": 3, 00:08:53.999 "base_bdevs_list": [ 00:08:53.999 { 00:08:53.999 "name": "BaseBdev1", 00:08:53.999 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.999 "is_configured": false, 00:08:53.999 "data_offset": 0, 00:08:53.999 "data_size": 0 00:08:53.999 }, 00:08:53.999 { 00:08:53.999 "name": null, 00:08:53.999 "uuid": "4d2ec692-4309-4518-9a7b-241f6d9f24a2", 00:08:53.999 "is_configured": false, 00:08:53.999 "data_offset": 0, 
00:08:53.999 "data_size": 65536 00:08:53.999 }, 00:08:53.999 { 00:08:53.999 "name": "BaseBdev3", 00:08:53.999 "uuid": "ed007d28-9168-4213-8a81-e49b8c57f0b1", 00:08:53.999 "is_configured": true, 00:08:53.999 "data_offset": 0, 00:08:53.999 "data_size": 65536 00:08:53.999 } 00:08:53.999 ] 00:08:53.999 }' 00:08:53.999 09:53:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:53.999 09:53:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.568 09:53:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.568 09:53:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:54.568 09:53:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.568 09:53:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.568 09:53:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.568 09:53:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:54.568 09:53:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:54.568 09:53:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.568 09:53:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.568 [2024-10-21 09:53:31.060674] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:54.568 BaseBdev1 00:08:54.568 09:53:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.568 09:53:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:54.568 09:53:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local 
bdev_name=BaseBdev1 00:08:54.568 09:53:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:54.568 09:53:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:54.568 09:53:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:54.568 09:53:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:54.568 09:53:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:54.568 09:53:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.568 09:53:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.568 09:53:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.568 09:53:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:54.568 09:53:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.568 09:53:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.568 [ 00:08:54.568 { 00:08:54.568 "name": "BaseBdev1", 00:08:54.568 "aliases": [ 00:08:54.568 "4f34eef1-2c55-4003-9d0f-b03afffdf503" 00:08:54.568 ], 00:08:54.568 "product_name": "Malloc disk", 00:08:54.568 "block_size": 512, 00:08:54.568 "num_blocks": 65536, 00:08:54.568 "uuid": "4f34eef1-2c55-4003-9d0f-b03afffdf503", 00:08:54.569 "assigned_rate_limits": { 00:08:54.569 "rw_ios_per_sec": 0, 00:08:54.569 "rw_mbytes_per_sec": 0, 00:08:54.569 "r_mbytes_per_sec": 0, 00:08:54.569 "w_mbytes_per_sec": 0 00:08:54.569 }, 00:08:54.569 "claimed": true, 00:08:54.569 "claim_type": "exclusive_write", 00:08:54.569 "zoned": false, 00:08:54.569 "supported_io_types": { 00:08:54.569 "read": true, 00:08:54.569 "write": true, 00:08:54.569 "unmap": 
true, 00:08:54.569 "flush": true, 00:08:54.569 "reset": true, 00:08:54.569 "nvme_admin": false, 00:08:54.569 "nvme_io": false, 00:08:54.569 "nvme_io_md": false, 00:08:54.569 "write_zeroes": true, 00:08:54.569 "zcopy": true, 00:08:54.569 "get_zone_info": false, 00:08:54.569 "zone_management": false, 00:08:54.569 "zone_append": false, 00:08:54.569 "compare": false, 00:08:54.569 "compare_and_write": false, 00:08:54.569 "abort": true, 00:08:54.569 "seek_hole": false, 00:08:54.569 "seek_data": false, 00:08:54.569 "copy": true, 00:08:54.569 "nvme_iov_md": false 00:08:54.569 }, 00:08:54.569 "memory_domains": [ 00:08:54.569 { 00:08:54.569 "dma_device_id": "system", 00:08:54.569 "dma_device_type": 1 00:08:54.569 }, 00:08:54.569 { 00:08:54.569 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:54.569 "dma_device_type": 2 00:08:54.569 } 00:08:54.569 ], 00:08:54.569 "driver_specific": {} 00:08:54.569 } 00:08:54.569 ] 00:08:54.569 09:53:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.569 09:53:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:54.569 09:53:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:54.569 09:53:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:54.569 09:53:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:54.569 09:53:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:54.569 09:53:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:54.569 09:53:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:54.569 09:53:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:54.569 09:53:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:54.569 09:53:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:54.569 09:53:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:54.569 09:53:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.569 09:53:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:54.569 09:53:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.569 09:53:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.569 09:53:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.569 09:53:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:54.569 "name": "Existed_Raid", 00:08:54.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:54.569 "strip_size_kb": 64, 00:08:54.569 "state": "configuring", 00:08:54.569 "raid_level": "raid0", 00:08:54.569 "superblock": false, 00:08:54.569 "num_base_bdevs": 3, 00:08:54.569 "num_base_bdevs_discovered": 2, 00:08:54.569 "num_base_bdevs_operational": 3, 00:08:54.569 "base_bdevs_list": [ 00:08:54.569 { 00:08:54.569 "name": "BaseBdev1", 00:08:54.569 "uuid": "4f34eef1-2c55-4003-9d0f-b03afffdf503", 00:08:54.569 "is_configured": true, 00:08:54.569 "data_offset": 0, 00:08:54.569 "data_size": 65536 00:08:54.569 }, 00:08:54.569 { 00:08:54.569 "name": null, 00:08:54.569 "uuid": "4d2ec692-4309-4518-9a7b-241f6d9f24a2", 00:08:54.569 "is_configured": false, 00:08:54.569 "data_offset": 0, 00:08:54.569 "data_size": 65536 00:08:54.569 }, 00:08:54.569 { 00:08:54.569 "name": "BaseBdev3", 00:08:54.569 "uuid": "ed007d28-9168-4213-8a81-e49b8c57f0b1", 00:08:54.569 "is_configured": true, 00:08:54.569 "data_offset": 0, 
00:08:54.569 "data_size": 65536 00:08:54.569 } 00:08:54.569 ] 00:08:54.569 }' 00:08:54.569 09:53:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:54.569 09:53:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.139 09:53:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.139 09:53:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.139 09:53:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.139 09:53:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:55.139 09:53:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.139 09:53:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:55.139 09:53:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:55.139 09:53:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.139 09:53:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.139 [2024-10-21 09:53:31.555860] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:55.139 09:53:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.139 09:53:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:55.139 09:53:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:55.139 09:53:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:55.139 09:53:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:08:55.139 09:53:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:55.139 09:53:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:55.139 09:53:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:55.139 09:53:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:55.139 09:53:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:55.139 09:53:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:55.139 09:53:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:55.139 09:53:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.139 09:53:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.139 09:53:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.139 09:53:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.139 09:53:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:55.139 "name": "Existed_Raid", 00:08:55.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.140 "strip_size_kb": 64, 00:08:55.140 "state": "configuring", 00:08:55.140 "raid_level": "raid0", 00:08:55.140 "superblock": false, 00:08:55.140 "num_base_bdevs": 3, 00:08:55.140 "num_base_bdevs_discovered": 1, 00:08:55.140 "num_base_bdevs_operational": 3, 00:08:55.140 "base_bdevs_list": [ 00:08:55.140 { 00:08:55.140 "name": "BaseBdev1", 00:08:55.140 "uuid": "4f34eef1-2c55-4003-9d0f-b03afffdf503", 00:08:55.140 "is_configured": true, 00:08:55.140 "data_offset": 0, 00:08:55.140 "data_size": 65536 00:08:55.140 }, 00:08:55.140 { 
00:08:55.140 "name": null, 00:08:55.140 "uuid": "4d2ec692-4309-4518-9a7b-241f6d9f24a2", 00:08:55.140 "is_configured": false, 00:08:55.140 "data_offset": 0, 00:08:55.140 "data_size": 65536 00:08:55.140 }, 00:08:55.140 { 00:08:55.140 "name": null, 00:08:55.140 "uuid": "ed007d28-9168-4213-8a81-e49b8c57f0b1", 00:08:55.140 "is_configured": false, 00:08:55.140 "data_offset": 0, 00:08:55.140 "data_size": 65536 00:08:55.140 } 00:08:55.140 ] 00:08:55.140 }' 00:08:55.140 09:53:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:55.140 09:53:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.399 09:53:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.399 09:53:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:55.399 09:53:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.399 09:53:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.399 09:53:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.658 09:53:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:55.658 09:53:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:55.658 09:53:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.658 09:53:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.658 [2024-10-21 09:53:32.019142] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:55.658 09:53:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.658 09:53:32 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:55.658 09:53:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:55.658 09:53:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:55.658 09:53:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:55.658 09:53:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:55.658 09:53:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:55.658 09:53:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:55.658 09:53:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:55.658 09:53:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:55.658 09:53:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:55.658 09:53:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.658 09:53:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:55.658 09:53:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.658 09:53:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.658 09:53:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.658 09:53:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:55.658 "name": "Existed_Raid", 00:08:55.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.658 "strip_size_kb": 64, 00:08:55.658 "state": "configuring", 00:08:55.658 "raid_level": "raid0", 00:08:55.658 
"superblock": false, 00:08:55.658 "num_base_bdevs": 3, 00:08:55.658 "num_base_bdevs_discovered": 2, 00:08:55.658 "num_base_bdevs_operational": 3, 00:08:55.658 "base_bdevs_list": [ 00:08:55.658 { 00:08:55.658 "name": "BaseBdev1", 00:08:55.658 "uuid": "4f34eef1-2c55-4003-9d0f-b03afffdf503", 00:08:55.658 "is_configured": true, 00:08:55.659 "data_offset": 0, 00:08:55.659 "data_size": 65536 00:08:55.659 }, 00:08:55.659 { 00:08:55.659 "name": null, 00:08:55.659 "uuid": "4d2ec692-4309-4518-9a7b-241f6d9f24a2", 00:08:55.659 "is_configured": false, 00:08:55.659 "data_offset": 0, 00:08:55.659 "data_size": 65536 00:08:55.659 }, 00:08:55.659 { 00:08:55.659 "name": "BaseBdev3", 00:08:55.659 "uuid": "ed007d28-9168-4213-8a81-e49b8c57f0b1", 00:08:55.659 "is_configured": true, 00:08:55.659 "data_offset": 0, 00:08:55.659 "data_size": 65536 00:08:55.659 } 00:08:55.659 ] 00:08:55.659 }' 00:08:55.659 09:53:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:55.659 09:53:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.918 09:53:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.918 09:53:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:55.918 09:53:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.918 09:53:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.918 09:53:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.176 09:53:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:56.176 09:53:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:56.176 09:53:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:08:56.176 09:53:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.176 [2024-10-21 09:53:32.534589] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:56.176 09:53:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.176 09:53:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:56.176 09:53:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:56.176 09:53:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:56.176 09:53:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:56.176 09:53:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:56.176 09:53:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:56.176 09:53:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:56.176 09:53:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:56.176 09:53:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:56.176 09:53:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:56.176 09:53:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.176 09:53:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.176 09:53:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:56.176 09:53:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.176 09:53:32 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.176 09:53:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:56.176 "name": "Existed_Raid", 00:08:56.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:56.176 "strip_size_kb": 64, 00:08:56.176 "state": "configuring", 00:08:56.176 "raid_level": "raid0", 00:08:56.176 "superblock": false, 00:08:56.176 "num_base_bdevs": 3, 00:08:56.176 "num_base_bdevs_discovered": 1, 00:08:56.176 "num_base_bdevs_operational": 3, 00:08:56.176 "base_bdevs_list": [ 00:08:56.176 { 00:08:56.176 "name": null, 00:08:56.176 "uuid": "4f34eef1-2c55-4003-9d0f-b03afffdf503", 00:08:56.176 "is_configured": false, 00:08:56.176 "data_offset": 0, 00:08:56.176 "data_size": 65536 00:08:56.176 }, 00:08:56.176 { 00:08:56.176 "name": null, 00:08:56.176 "uuid": "4d2ec692-4309-4518-9a7b-241f6d9f24a2", 00:08:56.176 "is_configured": false, 00:08:56.176 "data_offset": 0, 00:08:56.176 "data_size": 65536 00:08:56.176 }, 00:08:56.176 { 00:08:56.176 "name": "BaseBdev3", 00:08:56.176 "uuid": "ed007d28-9168-4213-8a81-e49b8c57f0b1", 00:08:56.176 "is_configured": true, 00:08:56.176 "data_offset": 0, 00:08:56.176 "data_size": 65536 00:08:56.176 } 00:08:56.176 ] 00:08:56.176 }' 00:08:56.176 09:53:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:56.176 09:53:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.743 09:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.743 09:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:56.743 09:53:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.743 09:53:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.743 09:53:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:08:56.743 09:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:56.743 09:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:56.743 09:53:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.743 09:53:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.743 [2024-10-21 09:53:33.177374] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:56.743 09:53:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.743 09:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:56.743 09:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:56.743 09:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:56.743 09:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:56.743 09:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:56.743 09:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:56.743 09:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:56.743 09:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:56.743 09:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:56.743 09:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:56.743 09:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:08:56.743 09:53:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.743 09:53:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.743 09:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:56.743 09:53:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.743 09:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:56.743 "name": "Existed_Raid", 00:08:56.743 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:56.743 "strip_size_kb": 64, 00:08:56.743 "state": "configuring", 00:08:56.743 "raid_level": "raid0", 00:08:56.743 "superblock": false, 00:08:56.743 "num_base_bdevs": 3, 00:08:56.743 "num_base_bdevs_discovered": 2, 00:08:56.743 "num_base_bdevs_operational": 3, 00:08:56.743 "base_bdevs_list": [ 00:08:56.743 { 00:08:56.743 "name": null, 00:08:56.743 "uuid": "4f34eef1-2c55-4003-9d0f-b03afffdf503", 00:08:56.743 "is_configured": false, 00:08:56.743 "data_offset": 0, 00:08:56.743 "data_size": 65536 00:08:56.743 }, 00:08:56.743 { 00:08:56.743 "name": "BaseBdev2", 00:08:56.743 "uuid": "4d2ec692-4309-4518-9a7b-241f6d9f24a2", 00:08:56.743 "is_configured": true, 00:08:56.743 "data_offset": 0, 00:08:56.743 "data_size": 65536 00:08:56.743 }, 00:08:56.743 { 00:08:56.743 "name": "BaseBdev3", 00:08:56.743 "uuid": "ed007d28-9168-4213-8a81-e49b8c57f0b1", 00:08:56.743 "is_configured": true, 00:08:56.743 "data_offset": 0, 00:08:56.743 "data_size": 65536 00:08:56.743 } 00:08:56.743 ] 00:08:56.743 }' 00:08:56.743 09:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:56.743 09:53:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.314 09:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.314 09:53:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:57.314 09:53:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.314 09:53:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.314 09:53:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.314 09:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:57.314 09:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.314 09:53:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.314 09:53:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.314 09:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:57.314 09:53:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.314 09:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 4f34eef1-2c55-4003-9d0f-b03afffdf503 00:08:57.314 09:53:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.314 09:53:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.314 [2024-10-21 09:53:33.769415] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:57.314 [2024-10-21 09:53:33.769549] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:08:57.314 [2024-10-21 09:53:33.769582] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:57.314 [2024-10-21 09:53:33.769884] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 
00:08:57.314 [2024-10-21 09:53:33.770057] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:08:57.314 [2024-10-21 09:53:33.770067] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006600 00:08:57.314 [2024-10-21 09:53:33.770343] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:57.314 NewBaseBdev 00:08:57.314 09:53:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.314 09:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:57.314 09:53:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:08:57.314 09:53:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:57.314 09:53:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:57.314 09:53:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:57.314 09:53:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:57.314 09:53:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:57.314 09:53:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.314 09:53:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.314 09:53:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.314 09:53:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:57.315 09:53:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.315 09:53:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:08:57.315 [ 00:08:57.315 { 00:08:57.315 "name": "NewBaseBdev", 00:08:57.315 "aliases": [ 00:08:57.315 "4f34eef1-2c55-4003-9d0f-b03afffdf503" 00:08:57.315 ], 00:08:57.315 "product_name": "Malloc disk", 00:08:57.315 "block_size": 512, 00:08:57.315 "num_blocks": 65536, 00:08:57.315 "uuid": "4f34eef1-2c55-4003-9d0f-b03afffdf503", 00:08:57.315 "assigned_rate_limits": { 00:08:57.315 "rw_ios_per_sec": 0, 00:08:57.315 "rw_mbytes_per_sec": 0, 00:08:57.315 "r_mbytes_per_sec": 0, 00:08:57.315 "w_mbytes_per_sec": 0 00:08:57.315 }, 00:08:57.315 "claimed": true, 00:08:57.315 "claim_type": "exclusive_write", 00:08:57.315 "zoned": false, 00:08:57.315 "supported_io_types": { 00:08:57.315 "read": true, 00:08:57.315 "write": true, 00:08:57.315 "unmap": true, 00:08:57.315 "flush": true, 00:08:57.315 "reset": true, 00:08:57.315 "nvme_admin": false, 00:08:57.315 "nvme_io": false, 00:08:57.315 "nvme_io_md": false, 00:08:57.315 "write_zeroes": true, 00:08:57.315 "zcopy": true, 00:08:57.315 "get_zone_info": false, 00:08:57.315 "zone_management": false, 00:08:57.315 "zone_append": false, 00:08:57.315 "compare": false, 00:08:57.315 "compare_and_write": false, 00:08:57.315 "abort": true, 00:08:57.315 "seek_hole": false, 00:08:57.315 "seek_data": false, 00:08:57.315 "copy": true, 00:08:57.315 "nvme_iov_md": false 00:08:57.315 }, 00:08:57.315 "memory_domains": [ 00:08:57.315 { 00:08:57.315 "dma_device_id": "system", 00:08:57.315 "dma_device_type": 1 00:08:57.315 }, 00:08:57.315 { 00:08:57.315 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.315 "dma_device_type": 2 00:08:57.315 } 00:08:57.315 ], 00:08:57.315 "driver_specific": {} 00:08:57.315 } 00:08:57.315 ] 00:08:57.315 09:53:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.315 09:53:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:57.315 09:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:08:57.315 09:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:57.315 09:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:57.315 09:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:57.315 09:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:57.315 09:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:57.315 09:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:57.315 09:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:57.315 09:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:57.315 09:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:57.315 09:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:57.315 09:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.315 09:53:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.315 09:53:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.315 09:53:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.315 09:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:57.315 "name": "Existed_Raid", 00:08:57.315 "uuid": "7d5f966e-d61a-478d-999c-e7c28d928d0a", 00:08:57.315 "strip_size_kb": 64, 00:08:57.315 "state": "online", 00:08:57.315 "raid_level": "raid0", 00:08:57.315 "superblock": false, 00:08:57.315 "num_base_bdevs": 3, 00:08:57.315 
"num_base_bdevs_discovered": 3, 00:08:57.315 "num_base_bdevs_operational": 3, 00:08:57.315 "base_bdevs_list": [ 00:08:57.315 { 00:08:57.315 "name": "NewBaseBdev", 00:08:57.315 "uuid": "4f34eef1-2c55-4003-9d0f-b03afffdf503", 00:08:57.315 "is_configured": true, 00:08:57.315 "data_offset": 0, 00:08:57.315 "data_size": 65536 00:08:57.315 }, 00:08:57.315 { 00:08:57.315 "name": "BaseBdev2", 00:08:57.315 "uuid": "4d2ec692-4309-4518-9a7b-241f6d9f24a2", 00:08:57.315 "is_configured": true, 00:08:57.315 "data_offset": 0, 00:08:57.315 "data_size": 65536 00:08:57.315 }, 00:08:57.315 { 00:08:57.315 "name": "BaseBdev3", 00:08:57.315 "uuid": "ed007d28-9168-4213-8a81-e49b8c57f0b1", 00:08:57.315 "is_configured": true, 00:08:57.315 "data_offset": 0, 00:08:57.315 "data_size": 65536 00:08:57.315 } 00:08:57.315 ] 00:08:57.315 }' 00:08:57.315 09:53:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:57.315 09:53:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.889 09:53:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:57.889 09:53:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:57.889 09:53:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:57.889 09:53:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:57.889 09:53:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:57.889 09:53:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:57.889 09:53:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:57.889 09:53:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.889 09:53:34 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:08:57.889 09:53:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:57.889 [2024-10-21 09:53:34.276901] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:57.889 09:53:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.889 09:53:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:57.889 "name": "Existed_Raid", 00:08:57.889 "aliases": [ 00:08:57.889 "7d5f966e-d61a-478d-999c-e7c28d928d0a" 00:08:57.889 ], 00:08:57.889 "product_name": "Raid Volume", 00:08:57.889 "block_size": 512, 00:08:57.889 "num_blocks": 196608, 00:08:57.889 "uuid": "7d5f966e-d61a-478d-999c-e7c28d928d0a", 00:08:57.889 "assigned_rate_limits": { 00:08:57.889 "rw_ios_per_sec": 0, 00:08:57.889 "rw_mbytes_per_sec": 0, 00:08:57.889 "r_mbytes_per_sec": 0, 00:08:57.889 "w_mbytes_per_sec": 0 00:08:57.889 }, 00:08:57.889 "claimed": false, 00:08:57.889 "zoned": false, 00:08:57.889 "supported_io_types": { 00:08:57.889 "read": true, 00:08:57.889 "write": true, 00:08:57.889 "unmap": true, 00:08:57.889 "flush": true, 00:08:57.889 "reset": true, 00:08:57.889 "nvme_admin": false, 00:08:57.889 "nvme_io": false, 00:08:57.889 "nvme_io_md": false, 00:08:57.889 "write_zeroes": true, 00:08:57.889 "zcopy": false, 00:08:57.889 "get_zone_info": false, 00:08:57.889 "zone_management": false, 00:08:57.889 "zone_append": false, 00:08:57.889 "compare": false, 00:08:57.889 "compare_and_write": false, 00:08:57.889 "abort": false, 00:08:57.889 "seek_hole": false, 00:08:57.889 "seek_data": false, 00:08:57.889 "copy": false, 00:08:57.889 "nvme_iov_md": false 00:08:57.889 }, 00:08:57.889 "memory_domains": [ 00:08:57.889 { 00:08:57.889 "dma_device_id": "system", 00:08:57.889 "dma_device_type": 1 00:08:57.889 }, 00:08:57.889 { 00:08:57.889 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.889 "dma_device_type": 2 00:08:57.889 }, 00:08:57.889 
{ 00:08:57.889 "dma_device_id": "system", 00:08:57.889 "dma_device_type": 1 00:08:57.889 }, 00:08:57.889 { 00:08:57.890 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.890 "dma_device_type": 2 00:08:57.890 }, 00:08:57.890 { 00:08:57.890 "dma_device_id": "system", 00:08:57.890 "dma_device_type": 1 00:08:57.890 }, 00:08:57.890 { 00:08:57.890 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.890 "dma_device_type": 2 00:08:57.890 } 00:08:57.890 ], 00:08:57.890 "driver_specific": { 00:08:57.890 "raid": { 00:08:57.890 "uuid": "7d5f966e-d61a-478d-999c-e7c28d928d0a", 00:08:57.890 "strip_size_kb": 64, 00:08:57.890 "state": "online", 00:08:57.890 "raid_level": "raid0", 00:08:57.890 "superblock": false, 00:08:57.890 "num_base_bdevs": 3, 00:08:57.890 "num_base_bdevs_discovered": 3, 00:08:57.890 "num_base_bdevs_operational": 3, 00:08:57.890 "base_bdevs_list": [ 00:08:57.890 { 00:08:57.890 "name": "NewBaseBdev", 00:08:57.890 "uuid": "4f34eef1-2c55-4003-9d0f-b03afffdf503", 00:08:57.890 "is_configured": true, 00:08:57.890 "data_offset": 0, 00:08:57.890 "data_size": 65536 00:08:57.890 }, 00:08:57.890 { 00:08:57.890 "name": "BaseBdev2", 00:08:57.890 "uuid": "4d2ec692-4309-4518-9a7b-241f6d9f24a2", 00:08:57.890 "is_configured": true, 00:08:57.890 "data_offset": 0, 00:08:57.890 "data_size": 65536 00:08:57.890 }, 00:08:57.890 { 00:08:57.890 "name": "BaseBdev3", 00:08:57.890 "uuid": "ed007d28-9168-4213-8a81-e49b8c57f0b1", 00:08:57.890 "is_configured": true, 00:08:57.890 "data_offset": 0, 00:08:57.890 "data_size": 65536 00:08:57.890 } 00:08:57.890 ] 00:08:57.890 } 00:08:57.890 } 00:08:57.890 }' 00:08:57.890 09:53:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:57.890 09:53:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:57.890 BaseBdev2 00:08:57.890 BaseBdev3' 00:08:57.890 09:53:34 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:57.890 09:53:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:57.890 09:53:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:57.890 09:53:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:57.890 09:53:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:57.890 09:53:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.890 09:53:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.890 09:53:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.890 09:53:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:57.890 09:53:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:57.890 09:53:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:57.890 09:53:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:57.890 09:53:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.890 09:53:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.890 09:53:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:58.149 09:53:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.149 09:53:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:58.149 
09:53:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:58.149 09:53:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:58.149 09:53:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:58.149 09:53:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:58.149 09:53:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.149 09:53:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.149 09:53:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.149 09:53:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:58.149 09:53:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:58.149 09:53:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:58.149 09:53:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.149 09:53:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.149 [2024-10-21 09:53:34.572071] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:58.149 [2024-10-21 09:53:34.572109] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:58.149 [2024-10-21 09:53:34.572192] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:58.149 [2024-10-21 09:53:34.572254] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:58.149 [2024-10-21 09:53:34.572267] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state offline 00:08:58.149 09:53:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.149 09:53:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 63395 00:08:58.149 09:53:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 63395 ']' 00:08:58.149 09:53:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 63395 00:08:58.149 09:53:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:08:58.149 09:53:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:58.149 09:53:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63395 00:08:58.149 killing process with pid 63395 00:08:58.149 09:53:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:58.149 09:53:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:58.149 09:53:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63395' 00:08:58.149 09:53:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 63395 00:08:58.149 [2024-10-21 09:53:34.607389] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:58.149 09:53:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 63395 00:08:58.409 [2024-10-21 09:53:34.942660] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:59.788 ************************************ 00:08:59.788 END TEST raid_state_function_test 00:08:59.788 ************************************ 00:08:59.788 09:53:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:59.788 00:08:59.788 real 0m10.938s 00:08:59.788 user 0m17.284s 
00:08:59.788 sys 0m1.874s 00:08:59.788 09:53:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:59.788 09:53:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.788 09:53:36 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:08:59.788 09:53:36 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:59.788 09:53:36 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:59.788 09:53:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:59.788 ************************************ 00:08:59.788 START TEST raid_state_function_test_sb 00:08:59.788 ************************************ 00:08:59.788 09:53:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 3 true 00:08:59.788 09:53:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:59.788 09:53:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:59.788 09:53:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:59.788 09:53:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:59.788 09:53:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:59.788 09:53:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:59.788 09:53:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:59.788 09:53:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:59.788 09:53:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:59.788 09:53:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo 
BaseBdev2 00:08:59.788 09:53:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:59.788 09:53:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:59.788 09:53:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:59.788 09:53:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:59.788 09:53:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:59.788 09:53:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:59.788 09:53:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:59.788 09:53:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:59.788 09:53:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:59.788 09:53:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:59.788 09:53:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:59.788 09:53:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:59.788 09:53:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:59.788 09:53:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:59.788 09:53:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:59.788 09:53:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:59.788 09:53:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=64023 00:08:59.788 09:53:36 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:59.788 09:53:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64023' 00:08:59.788 Process raid pid: 64023 00:08:59.788 09:53:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 64023 00:08:59.788 09:53:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 64023 ']' 00:08:59.788 09:53:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:59.788 09:53:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:59.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:59.788 09:53:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:59.788 09:53:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:59.788 09:53:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.788 [2024-10-21 09:53:36.281397] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
00:08:59.788 [2024-10-21 09:53:36.281532] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:00.048 [2024-10-21 09:53:36.445703] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.048 [2024-10-21 09:53:36.591691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.310 [2024-10-21 09:53:36.840303] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:00.310 [2024-10-21 09:53:36.840355] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:00.569 09:53:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:00.569 09:53:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:09:00.569 09:53:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:00.569 09:53:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.569 09:53:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.569 [2024-10-21 09:53:37.118839] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:00.569 [2024-10-21 09:53:37.118903] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:00.569 [2024-10-21 09:53:37.118912] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:00.569 [2024-10-21 09:53:37.118922] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:00.569 [2024-10-21 09:53:37.118927] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:09:00.569 [2024-10-21 09:53:37.118936] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:00.569 09:53:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.569 09:53:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:00.569 09:53:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:00.569 09:53:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:00.570 09:53:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:00.570 09:53:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:00.570 09:53:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:00.570 09:53:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:00.570 09:53:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:00.570 09:53:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:00.570 09:53:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:00.570 09:53:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.570 09:53:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.570 09:53:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.570 09:53:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:00.570 09:53:37 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.829 09:53:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:00.829 "name": "Existed_Raid", 00:09:00.829 "uuid": "ec3a93e7-d0ba-4c95-b9cd-7fb23a6026b0", 00:09:00.829 "strip_size_kb": 64, 00:09:00.829 "state": "configuring", 00:09:00.829 "raid_level": "raid0", 00:09:00.829 "superblock": true, 00:09:00.829 "num_base_bdevs": 3, 00:09:00.829 "num_base_bdevs_discovered": 0, 00:09:00.829 "num_base_bdevs_operational": 3, 00:09:00.829 "base_bdevs_list": [ 00:09:00.829 { 00:09:00.829 "name": "BaseBdev1", 00:09:00.829 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:00.829 "is_configured": false, 00:09:00.829 "data_offset": 0, 00:09:00.829 "data_size": 0 00:09:00.829 }, 00:09:00.829 { 00:09:00.829 "name": "BaseBdev2", 00:09:00.829 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:00.829 "is_configured": false, 00:09:00.829 "data_offset": 0, 00:09:00.829 "data_size": 0 00:09:00.829 }, 00:09:00.829 { 00:09:00.829 "name": "BaseBdev3", 00:09:00.829 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:00.829 "is_configured": false, 00:09:00.829 "data_offset": 0, 00:09:00.829 "data_size": 0 00:09:00.829 } 00:09:00.829 ] 00:09:00.829 }' 00:09:00.829 09:53:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:00.829 09:53:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.089 09:53:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:01.089 09:53:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.089 09:53:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.089 [2024-10-21 09:53:37.518143] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:01.089 [2024-10-21 09:53:37.518313] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000005b80 name Existed_Raid, state configuring 00:09:01.089 09:53:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.089 09:53:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:01.089 09:53:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.089 09:53:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.089 [2024-10-21 09:53:37.530113] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:01.089 [2024-10-21 09:53:37.530161] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:01.089 [2024-10-21 09:53:37.530170] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:01.089 [2024-10-21 09:53:37.530180] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:01.089 [2024-10-21 09:53:37.530186] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:01.089 [2024-10-21 09:53:37.530195] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:01.089 09:53:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.089 09:53:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:01.089 09:53:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.089 09:53:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.089 [2024-10-21 09:53:37.583072] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:01.089 BaseBdev1 
00:09:01.089 09:53:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.089 09:53:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:01.089 09:53:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:01.089 09:53:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:01.090 09:53:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:01.090 09:53:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:01.090 09:53:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:01.090 09:53:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:01.090 09:53:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.090 09:53:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.090 09:53:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.090 09:53:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:01.090 09:53:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.090 09:53:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.090 [ 00:09:01.090 { 00:09:01.090 "name": "BaseBdev1", 00:09:01.090 "aliases": [ 00:09:01.090 "2e8bec49-4938-4082-a011-b626a1bc95c7" 00:09:01.090 ], 00:09:01.090 "product_name": "Malloc disk", 00:09:01.090 "block_size": 512, 00:09:01.090 "num_blocks": 65536, 00:09:01.090 "uuid": "2e8bec49-4938-4082-a011-b626a1bc95c7", 00:09:01.090 "assigned_rate_limits": { 00:09:01.090 
"rw_ios_per_sec": 0, 00:09:01.090 "rw_mbytes_per_sec": 0, 00:09:01.090 "r_mbytes_per_sec": 0, 00:09:01.090 "w_mbytes_per_sec": 0 00:09:01.090 }, 00:09:01.090 "claimed": true, 00:09:01.090 "claim_type": "exclusive_write", 00:09:01.090 "zoned": false, 00:09:01.090 "supported_io_types": { 00:09:01.090 "read": true, 00:09:01.090 "write": true, 00:09:01.090 "unmap": true, 00:09:01.090 "flush": true, 00:09:01.090 "reset": true, 00:09:01.090 "nvme_admin": false, 00:09:01.090 "nvme_io": false, 00:09:01.090 "nvme_io_md": false, 00:09:01.090 "write_zeroes": true, 00:09:01.090 "zcopy": true, 00:09:01.090 "get_zone_info": false, 00:09:01.090 "zone_management": false, 00:09:01.090 "zone_append": false, 00:09:01.090 "compare": false, 00:09:01.090 "compare_and_write": false, 00:09:01.090 "abort": true, 00:09:01.090 "seek_hole": false, 00:09:01.090 "seek_data": false, 00:09:01.090 "copy": true, 00:09:01.090 "nvme_iov_md": false 00:09:01.090 }, 00:09:01.090 "memory_domains": [ 00:09:01.090 { 00:09:01.090 "dma_device_id": "system", 00:09:01.090 "dma_device_type": 1 00:09:01.090 }, 00:09:01.090 { 00:09:01.090 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.090 "dma_device_type": 2 00:09:01.090 } 00:09:01.090 ], 00:09:01.090 "driver_specific": {} 00:09:01.090 } 00:09:01.090 ] 00:09:01.090 09:53:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.090 09:53:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:01.090 09:53:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:01.090 09:53:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:01.090 09:53:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:01.090 09:53:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:09:01.090 09:53:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:01.090 09:53:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:01.090 09:53:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:01.090 09:53:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:01.090 09:53:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:01.090 09:53:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:01.090 09:53:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.090 09:53:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:01.090 09:53:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.090 09:53:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.090 09:53:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.090 09:53:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:01.090 "name": "Existed_Raid", 00:09:01.090 "uuid": "a30db2b7-0b8f-45bd-8e2a-62a085f4a12f", 00:09:01.090 "strip_size_kb": 64, 00:09:01.090 "state": "configuring", 00:09:01.090 "raid_level": "raid0", 00:09:01.090 "superblock": true, 00:09:01.090 "num_base_bdevs": 3, 00:09:01.090 "num_base_bdevs_discovered": 1, 00:09:01.090 "num_base_bdevs_operational": 3, 00:09:01.090 "base_bdevs_list": [ 00:09:01.090 { 00:09:01.090 "name": "BaseBdev1", 00:09:01.090 "uuid": "2e8bec49-4938-4082-a011-b626a1bc95c7", 00:09:01.090 "is_configured": true, 00:09:01.090 "data_offset": 2048, 00:09:01.090 "data_size": 63488 
00:09:01.090 }, 00:09:01.090 { 00:09:01.090 "name": "BaseBdev2", 00:09:01.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:01.090 "is_configured": false, 00:09:01.090 "data_offset": 0, 00:09:01.090 "data_size": 0 00:09:01.090 }, 00:09:01.090 { 00:09:01.090 "name": "BaseBdev3", 00:09:01.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:01.090 "is_configured": false, 00:09:01.090 "data_offset": 0, 00:09:01.090 "data_size": 0 00:09:01.090 } 00:09:01.090 ] 00:09:01.090 }' 00:09:01.090 09:53:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:01.090 09:53:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.658 09:53:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:01.658 09:53:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.658 09:53:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.658 [2024-10-21 09:53:38.098296] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:01.658 [2024-10-21 09:53:38.098379] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000005f00 name Existed_Raid, state configuring 00:09:01.658 09:53:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.658 09:53:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:01.658 09:53:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.658 09:53:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.658 [2024-10-21 09:53:38.110297] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:01.658 [2024-10-21 
09:53:38.112406] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:01.659 [2024-10-21 09:53:38.112451] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:01.659 [2024-10-21 09:53:38.112461] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:01.659 [2024-10-21 09:53:38.112469] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:01.659 09:53:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.659 09:53:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:01.659 09:53:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:01.659 09:53:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:01.659 09:53:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:01.659 09:53:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:01.659 09:53:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:01.659 09:53:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:01.659 09:53:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:01.659 09:53:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:01.659 09:53:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:01.659 09:53:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:01.659 09:53:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:09:01.659 09:53:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.659 09:53:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:01.659 09:53:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.659 09:53:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.659 09:53:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.659 09:53:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:01.659 "name": "Existed_Raid", 00:09:01.659 "uuid": "cb706898-b7aa-4858-b276-0db53cfe752b", 00:09:01.659 "strip_size_kb": 64, 00:09:01.659 "state": "configuring", 00:09:01.659 "raid_level": "raid0", 00:09:01.659 "superblock": true, 00:09:01.659 "num_base_bdevs": 3, 00:09:01.659 "num_base_bdevs_discovered": 1, 00:09:01.659 "num_base_bdevs_operational": 3, 00:09:01.659 "base_bdevs_list": [ 00:09:01.659 { 00:09:01.659 "name": "BaseBdev1", 00:09:01.659 "uuid": "2e8bec49-4938-4082-a011-b626a1bc95c7", 00:09:01.659 "is_configured": true, 00:09:01.659 "data_offset": 2048, 00:09:01.659 "data_size": 63488 00:09:01.659 }, 00:09:01.659 { 00:09:01.659 "name": "BaseBdev2", 00:09:01.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:01.659 "is_configured": false, 00:09:01.659 "data_offset": 0, 00:09:01.659 "data_size": 0 00:09:01.659 }, 00:09:01.659 { 00:09:01.659 "name": "BaseBdev3", 00:09:01.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:01.659 "is_configured": false, 00:09:01.659 "data_offset": 0, 00:09:01.659 "data_size": 0 00:09:01.659 } 00:09:01.659 ] 00:09:01.659 }' 00:09:01.659 09:53:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:01.659 09:53:38 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:02.227 09:53:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:02.227 09:53:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.227 09:53:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.227 [2024-10-21 09:53:38.590274] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:02.227 BaseBdev2 00:09:02.227 09:53:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.227 09:53:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:02.227 09:53:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:02.227 09:53:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:02.227 09:53:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:02.227 09:53:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:02.227 09:53:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:02.227 09:53:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:02.227 09:53:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.227 09:53:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.227 09:53:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.227 09:53:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:02.227 09:53:38 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.227 09:53:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.227 [ 00:09:02.227 { 00:09:02.227 "name": "BaseBdev2", 00:09:02.227 "aliases": [ 00:09:02.227 "6b4e6c8e-b751-4c18-b590-ad4b7363901a" 00:09:02.227 ], 00:09:02.227 "product_name": "Malloc disk", 00:09:02.227 "block_size": 512, 00:09:02.227 "num_blocks": 65536, 00:09:02.227 "uuid": "6b4e6c8e-b751-4c18-b590-ad4b7363901a", 00:09:02.227 "assigned_rate_limits": { 00:09:02.227 "rw_ios_per_sec": 0, 00:09:02.227 "rw_mbytes_per_sec": 0, 00:09:02.227 "r_mbytes_per_sec": 0, 00:09:02.227 "w_mbytes_per_sec": 0 00:09:02.227 }, 00:09:02.227 "claimed": true, 00:09:02.227 "claim_type": "exclusive_write", 00:09:02.227 "zoned": false, 00:09:02.227 "supported_io_types": { 00:09:02.227 "read": true, 00:09:02.227 "write": true, 00:09:02.227 "unmap": true, 00:09:02.227 "flush": true, 00:09:02.227 "reset": true, 00:09:02.227 "nvme_admin": false, 00:09:02.227 "nvme_io": false, 00:09:02.227 "nvme_io_md": false, 00:09:02.227 "write_zeroes": true, 00:09:02.227 "zcopy": true, 00:09:02.227 "get_zone_info": false, 00:09:02.227 "zone_management": false, 00:09:02.227 "zone_append": false, 00:09:02.227 "compare": false, 00:09:02.227 "compare_and_write": false, 00:09:02.227 "abort": true, 00:09:02.227 "seek_hole": false, 00:09:02.227 "seek_data": false, 00:09:02.228 "copy": true, 00:09:02.228 "nvme_iov_md": false 00:09:02.228 }, 00:09:02.228 "memory_domains": [ 00:09:02.228 { 00:09:02.228 "dma_device_id": "system", 00:09:02.228 "dma_device_type": 1 00:09:02.228 }, 00:09:02.228 { 00:09:02.228 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:02.228 "dma_device_type": 2 00:09:02.228 } 00:09:02.228 ], 00:09:02.228 "driver_specific": {} 00:09:02.228 } 00:09:02.228 ] 00:09:02.228 09:53:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.228 09:53:38 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@907 -- # return 0 00:09:02.228 09:53:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:02.228 09:53:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:02.228 09:53:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:02.228 09:53:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:02.228 09:53:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:02.228 09:53:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:02.228 09:53:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:02.228 09:53:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:02.228 09:53:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:02.228 09:53:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:02.228 09:53:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:02.228 09:53:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:02.228 09:53:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.228 09:53:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:02.228 09:53:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.228 09:53:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.228 09:53:38 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.228 09:53:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:02.228 "name": "Existed_Raid", 00:09:02.228 "uuid": "cb706898-b7aa-4858-b276-0db53cfe752b", 00:09:02.228 "strip_size_kb": 64, 00:09:02.228 "state": "configuring", 00:09:02.228 "raid_level": "raid0", 00:09:02.228 "superblock": true, 00:09:02.228 "num_base_bdevs": 3, 00:09:02.228 "num_base_bdevs_discovered": 2, 00:09:02.228 "num_base_bdevs_operational": 3, 00:09:02.228 "base_bdevs_list": [ 00:09:02.228 { 00:09:02.228 "name": "BaseBdev1", 00:09:02.228 "uuid": "2e8bec49-4938-4082-a011-b626a1bc95c7", 00:09:02.228 "is_configured": true, 00:09:02.228 "data_offset": 2048, 00:09:02.228 "data_size": 63488 00:09:02.228 }, 00:09:02.228 { 00:09:02.228 "name": "BaseBdev2", 00:09:02.228 "uuid": "6b4e6c8e-b751-4c18-b590-ad4b7363901a", 00:09:02.228 "is_configured": true, 00:09:02.228 "data_offset": 2048, 00:09:02.228 "data_size": 63488 00:09:02.228 }, 00:09:02.228 { 00:09:02.228 "name": "BaseBdev3", 00:09:02.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:02.228 "is_configured": false, 00:09:02.228 "data_offset": 0, 00:09:02.228 "data_size": 0 00:09:02.228 } 00:09:02.228 ] 00:09:02.228 }' 00:09:02.228 09:53:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:02.228 09:53:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.486 09:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:02.486 09:53:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.486 09:53:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.745 [2024-10-21 09:53:39.113172] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:02.745 [2024-10-21 09:53:39.113559] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:09:02.745 [2024-10-21 09:53:39.113641] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:02.745 [2024-10-21 09:53:39.113959] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:09:02.745 BaseBdev3 00:09:02.745 [2024-10-21 09:53:39.114196] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:09:02.745 [2024-10-21 09:53:39.114247] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006280 00:09:02.745 [2024-10-21 09:53:39.114459] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:02.745 09:53:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.745 09:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:02.745 09:53:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:02.745 09:53:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:02.745 09:53:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:02.745 09:53:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:02.745 09:53:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:02.745 09:53:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:02.745 09:53:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.745 09:53:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.745 09:53:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:09:02.745 09:53:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:02.745 09:53:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.745 09:53:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.745 [ 00:09:02.745 { 00:09:02.745 "name": "BaseBdev3", 00:09:02.745 "aliases": [ 00:09:02.745 "ea857bd4-19b3-4e80-a11e-f76da9db60b6" 00:09:02.745 ], 00:09:02.745 "product_name": "Malloc disk", 00:09:02.745 "block_size": 512, 00:09:02.745 "num_blocks": 65536, 00:09:02.745 "uuid": "ea857bd4-19b3-4e80-a11e-f76da9db60b6", 00:09:02.745 "assigned_rate_limits": { 00:09:02.745 "rw_ios_per_sec": 0, 00:09:02.745 "rw_mbytes_per_sec": 0, 00:09:02.745 "r_mbytes_per_sec": 0, 00:09:02.746 "w_mbytes_per_sec": 0 00:09:02.746 }, 00:09:02.746 "claimed": true, 00:09:02.746 "claim_type": "exclusive_write", 00:09:02.746 "zoned": false, 00:09:02.746 "supported_io_types": { 00:09:02.746 "read": true, 00:09:02.746 "write": true, 00:09:02.746 "unmap": true, 00:09:02.746 "flush": true, 00:09:02.746 "reset": true, 00:09:02.746 "nvme_admin": false, 00:09:02.746 "nvme_io": false, 00:09:02.746 "nvme_io_md": false, 00:09:02.746 "write_zeroes": true, 00:09:02.746 "zcopy": true, 00:09:02.746 "get_zone_info": false, 00:09:02.746 "zone_management": false, 00:09:02.746 "zone_append": false, 00:09:02.746 "compare": false, 00:09:02.746 "compare_and_write": false, 00:09:02.746 "abort": true, 00:09:02.746 "seek_hole": false, 00:09:02.746 "seek_data": false, 00:09:02.746 "copy": true, 00:09:02.746 "nvme_iov_md": false 00:09:02.746 }, 00:09:02.746 "memory_domains": [ 00:09:02.746 { 00:09:02.746 "dma_device_id": "system", 00:09:02.746 "dma_device_type": 1 00:09:02.746 }, 00:09:02.746 { 00:09:02.746 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:02.746 "dma_device_type": 2 00:09:02.746 } 00:09:02.746 ], 00:09:02.746 "driver_specific": 
{} 00:09:02.746 } 00:09:02.746 ] 00:09:02.746 09:53:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.746 09:53:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:02.746 09:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:02.746 09:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:02.746 09:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:09:02.746 09:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:02.746 09:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:02.746 09:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:02.746 09:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:02.746 09:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:02.746 09:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:02.746 09:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:02.746 09:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:02.746 09:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:02.746 09:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.746 09:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:02.746 09:53:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable
00:09:02.746 09:53:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:02.746 09:53:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:02.746 09:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:02.746 "name": "Existed_Raid",
00:09:02.746 "uuid": "cb706898-b7aa-4858-b276-0db53cfe752b",
00:09:02.746 "strip_size_kb": 64,
00:09:02.746 "state": "online",
00:09:02.746 "raid_level": "raid0",
00:09:02.746 "superblock": true,
00:09:02.746 "num_base_bdevs": 3,
00:09:02.746 "num_base_bdevs_discovered": 3,
00:09:02.746 "num_base_bdevs_operational": 3,
00:09:02.746 "base_bdevs_list": [
00:09:02.746 {
00:09:02.746 "name": "BaseBdev1",
00:09:02.746 "uuid": "2e8bec49-4938-4082-a011-b626a1bc95c7",
00:09:02.746 "is_configured": true,
00:09:02.746 "data_offset": 2048,
00:09:02.746 "data_size": 63488
00:09:02.746 },
00:09:02.746 {
00:09:02.746 "name": "BaseBdev2",
00:09:02.746 "uuid": "6b4e6c8e-b751-4c18-b590-ad4b7363901a",
00:09:02.746 "is_configured": true,
00:09:02.746 "data_offset": 2048,
00:09:02.746 "data_size": 63488
00:09:02.746 },
00:09:02.746 {
00:09:02.746 "name": "BaseBdev3",
00:09:02.746 "uuid": "ea857bd4-19b3-4e80-a11e-f76da9db60b6",
00:09:02.746 "is_configured": true,
00:09:02.746 "data_offset": 2048,
00:09:02.746 "data_size": 63488
00:09:02.746 }
00:09:02.746 ]
00:09:02.746 }'
00:09:02.746 09:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:02.746 09:53:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:03.314 09:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:09:03.314 09:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:09:03.314 09:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:09:03.314 09:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:09:03.314 09:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name
00:09:03.314 09:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:09:03.314 09:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:09:03.314 09:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:09:03.314 09:53:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:03.314 09:53:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:03.314 [2024-10-21 09:53:39.632739] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:09:03.314 09:53:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:03.314 09:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:09:03.314 "name": "Existed_Raid",
00:09:03.314 "aliases": [
00:09:03.314 "cb706898-b7aa-4858-b276-0db53cfe752b"
00:09:03.314 ],
00:09:03.314 "product_name": "Raid Volume",
00:09:03.314 "block_size": 512,
00:09:03.314 "num_blocks": 190464,
00:09:03.314 "uuid": "cb706898-b7aa-4858-b276-0db53cfe752b",
00:09:03.314 "assigned_rate_limits": {
00:09:03.314 "rw_ios_per_sec": 0,
00:09:03.314 "rw_mbytes_per_sec": 0,
00:09:03.314 "r_mbytes_per_sec": 0,
00:09:03.314 "w_mbytes_per_sec": 0
00:09:03.314 },
00:09:03.314 "claimed": false,
00:09:03.314 "zoned": false,
00:09:03.314 "supported_io_types": {
00:09:03.314 "read": true,
00:09:03.314 "write": true,
00:09:03.314 "unmap": true,
00:09:03.314 "flush": true,
00:09:03.314 "reset": true,
00:09:03.314 "nvme_admin": false,
00:09:03.314 "nvme_io": false,
00:09:03.314 "nvme_io_md": false,
00:09:03.314 "write_zeroes": true,
00:09:03.314 "zcopy": false,
00:09:03.314 "get_zone_info": false,
00:09:03.314 "zone_management": false,
00:09:03.314 "zone_append": false,
00:09:03.314 "compare": false,
00:09:03.314 "compare_and_write": false,
00:09:03.314 "abort": false,
00:09:03.314 "seek_hole": false,
00:09:03.314 "seek_data": false,
00:09:03.314 "copy": false,
00:09:03.314 "nvme_iov_md": false
00:09:03.314 },
00:09:03.314 "memory_domains": [
00:09:03.314 {
00:09:03.314 "dma_device_id": "system",
00:09:03.314 "dma_device_type": 1
00:09:03.314 },
00:09:03.314 {
00:09:03.314 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:03.314 "dma_device_type": 2
00:09:03.314 },
00:09:03.314 {
00:09:03.314 "dma_device_id": "system",
00:09:03.314 "dma_device_type": 1
00:09:03.314 },
00:09:03.314 {
00:09:03.314 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:03.314 "dma_device_type": 2
00:09:03.314 },
00:09:03.314 {
00:09:03.314 "dma_device_id": "system",
00:09:03.314 "dma_device_type": 1
00:09:03.314 },
00:09:03.314 {
00:09:03.314 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:03.314 "dma_device_type": 2
00:09:03.314 }
00:09:03.314 ],
00:09:03.314 "driver_specific": {
00:09:03.314 "raid": {
00:09:03.314 "uuid": "cb706898-b7aa-4858-b276-0db53cfe752b",
00:09:03.314 "strip_size_kb": 64,
00:09:03.314 "state": "online",
00:09:03.314 "raid_level": "raid0",
00:09:03.314 "superblock": true,
00:09:03.314 "num_base_bdevs": 3,
00:09:03.314 "num_base_bdevs_discovered": 3,
00:09:03.314 "num_base_bdevs_operational": 3,
00:09:03.314 "base_bdevs_list": [
00:09:03.314 {
00:09:03.314 "name": "BaseBdev1",
00:09:03.314 "uuid": "2e8bec49-4938-4082-a011-b626a1bc95c7",
00:09:03.314 "is_configured": true,
00:09:03.314 "data_offset": 2048,
00:09:03.314 "data_size": 63488
00:09:03.314 },
00:09:03.314 {
00:09:03.314 "name": "BaseBdev2",
00:09:03.314 "uuid": "6b4e6c8e-b751-4c18-b590-ad4b7363901a",
00:09:03.314 "is_configured": true,
00:09:03.314 "data_offset": 2048,
00:09:03.314 "data_size": 63488
00:09:03.314 },
00:09:03.315 {
00:09:03.315 "name": "BaseBdev3",
00:09:03.315 "uuid": "ea857bd4-19b3-4e80-a11e-f76da9db60b6",
00:09:03.315 "is_configured": true,
00:09:03.315 "data_offset": 2048,
00:09:03.315 "data_size": 63488
00:09:03.315 }
00:09:03.315 ]
00:09:03.315 }
00:09:03.315 }
00:09:03.315 }'
00:09:03.315 09:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:09:03.315 09:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:09:03.315 BaseBdev2
00:09:03.315 BaseBdev3'
00:09:03.315 09:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:03.315 09:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:09:03.315 09:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:03.315 09:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:09:03.315 09:53:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:03.315 09:53:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:03.315 09:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:03.315 09:53:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:03.315 09:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:03.315 09:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:03.315 09:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:03.315 09:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:09:03.315 09:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:03.315 09:53:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:03.315 09:53:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:03.315 09:53:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:03.315 09:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:03.315 09:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:03.315 09:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:03.315 09:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:09:03.315 09:53:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:03.315 09:53:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:03.315 09:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:03.315 09:53:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:03.315 09:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:03.315 09:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:03.315 09:53:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:09:03.315 09:53:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:03.315 09:53:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:03.574 [2024-10-21 09:53:39.911858] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:09:03.574 [2024-10-21 09:53:39.911903] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:09:03.574 [2024-10-21 09:53:39.911963] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:09:03.574 09:53:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:03.574 09:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state
00:09:03.574 09:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0
00:09:03.574 09:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in
00:09:03.574 09:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1
00:09:03.574 09:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline
00:09:03.574 09:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2
00:09:03.574 09:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:03.574 09:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline
00:09:03.574 09:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:03.574 09:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:03.574 09:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:09:03.574 09:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:03.574 09:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:03.574 09:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:03.574 09:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:03.574 09:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:03.574 09:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:03.574 09:53:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:03.574 09:53:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:03.574 09:53:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:03.574 09:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:03.574 "name": "Existed_Raid",
00:09:03.574 "uuid": "cb706898-b7aa-4858-b276-0db53cfe752b",
00:09:03.574 "strip_size_kb": 64,
00:09:03.574 "state": "offline",
00:09:03.574 "raid_level": "raid0",
00:09:03.574 "superblock": true,
00:09:03.574 "num_base_bdevs": 3,
00:09:03.574 "num_base_bdevs_discovered": 2,
00:09:03.574 "num_base_bdevs_operational": 2,
00:09:03.574 "base_bdevs_list": [
00:09:03.574 {
00:09:03.574 "name": null,
00:09:03.574 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:03.574 "is_configured": false,
00:09:03.574 "data_offset": 0,
00:09:03.574 "data_size": 63488
00:09:03.574 },
00:09:03.574 {
00:09:03.574 "name": "BaseBdev2",
00:09:03.574 "uuid": "6b4e6c8e-b751-4c18-b590-ad4b7363901a",
00:09:03.574 "is_configured": true,
00:09:03.574 "data_offset": 2048,
00:09:03.574 "data_size": 63488
00:09:03.574 },
00:09:03.574 {
00:09:03.574 "name": "BaseBdev3",
00:09:03.574 "uuid": "ea857bd4-19b3-4e80-a11e-f76da9db60b6",
00:09:03.574 "is_configured": true, 00:09:03.574 "data_offset": 2048, 00:09:03.574 "data_size": 63488 00:09:03.574 } 00:09:03.574 ] 00:09:03.574 }' 00:09:03.575 09:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:03.575 09:53:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.142 09:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:04.142 09:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:04.142 09:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:04.143 09:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.143 09:53:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.143 09:53:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.143 09:53:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.143 09:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:04.143 09:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:04.143 09:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:04.143 09:53:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.143 09:53:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.143 [2024-10-21 09:53:40.465526] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:04.143 09:53:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.143 09:53:40 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:04.143 09:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:04.143 09:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.143 09:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:04.143 09:53:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.143 09:53:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.143 09:53:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.143 09:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:04.143 09:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:04.143 09:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:04.143 09:53:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.143 09:53:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.143 [2024-10-21 09:53:40.627658] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:04.143 [2024-10-21 09:53:40.627817] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state offline 00:09:04.143 09:53:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.143 09:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:04.143 09:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:04.402 09:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:04.402 09:53:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.402 09:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:04.402 09:53:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.402 09:53:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.402 09:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:04.402 09:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:04.402 09:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:04.402 09:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:04.402 09:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:04.402 09:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:04.402 09:53:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.402 09:53:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.402 BaseBdev2 00:09:04.402 09:53:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.402 09:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:04.402 09:53:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:04.402 09:53:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:04.402 09:53:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:04.402 09:53:40 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:04.402 09:53:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:04.402 09:53:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:04.402 09:53:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.402 09:53:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.402 09:53:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.402 09:53:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:04.402 09:53:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.402 09:53:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.402 [ 00:09:04.402 { 00:09:04.402 "name": "BaseBdev2", 00:09:04.402 "aliases": [ 00:09:04.402 "a0b87574-f521-40e5-94dd-adb052919cac" 00:09:04.402 ], 00:09:04.402 "product_name": "Malloc disk", 00:09:04.402 "block_size": 512, 00:09:04.402 "num_blocks": 65536, 00:09:04.402 "uuid": "a0b87574-f521-40e5-94dd-adb052919cac", 00:09:04.402 "assigned_rate_limits": { 00:09:04.402 "rw_ios_per_sec": 0, 00:09:04.402 "rw_mbytes_per_sec": 0, 00:09:04.402 "r_mbytes_per_sec": 0, 00:09:04.402 "w_mbytes_per_sec": 0 00:09:04.402 }, 00:09:04.402 "claimed": false, 00:09:04.402 "zoned": false, 00:09:04.402 "supported_io_types": { 00:09:04.402 "read": true, 00:09:04.402 "write": true, 00:09:04.402 "unmap": true, 00:09:04.402 "flush": true, 00:09:04.402 "reset": true, 00:09:04.402 "nvme_admin": false, 00:09:04.402 "nvme_io": false, 00:09:04.402 "nvme_io_md": false, 00:09:04.402 "write_zeroes": true, 00:09:04.402 "zcopy": true, 00:09:04.402 "get_zone_info": false, 00:09:04.402 
"zone_management": false, 00:09:04.402 "zone_append": false, 00:09:04.402 "compare": false, 00:09:04.402 "compare_and_write": false, 00:09:04.402 "abort": true, 00:09:04.402 "seek_hole": false, 00:09:04.402 "seek_data": false, 00:09:04.402 "copy": true, 00:09:04.402 "nvme_iov_md": false 00:09:04.402 }, 00:09:04.402 "memory_domains": [ 00:09:04.402 { 00:09:04.402 "dma_device_id": "system", 00:09:04.402 "dma_device_type": 1 00:09:04.402 }, 00:09:04.402 { 00:09:04.402 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:04.402 "dma_device_type": 2 00:09:04.402 } 00:09:04.402 ], 00:09:04.402 "driver_specific": {} 00:09:04.402 } 00:09:04.402 ] 00:09:04.402 09:53:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.402 09:53:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:04.402 09:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:04.402 09:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:04.402 09:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:04.402 09:53:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.402 09:53:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.402 BaseBdev3 00:09:04.402 09:53:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.402 09:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:04.402 09:53:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:04.402 09:53:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:04.402 09:53:40 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@901 -- # local i 00:09:04.402 09:53:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:04.402 09:53:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:04.402 09:53:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:04.402 09:53:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.402 09:53:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.402 09:53:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.403 09:53:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:04.403 09:53:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.403 09:53:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.403 [ 00:09:04.403 { 00:09:04.403 "name": "BaseBdev3", 00:09:04.403 "aliases": [ 00:09:04.403 "b06b5fe3-8bf1-429a-8a52-6ca9147ae0f1" 00:09:04.403 ], 00:09:04.403 "product_name": "Malloc disk", 00:09:04.403 "block_size": 512, 00:09:04.403 "num_blocks": 65536, 00:09:04.403 "uuid": "b06b5fe3-8bf1-429a-8a52-6ca9147ae0f1", 00:09:04.403 "assigned_rate_limits": { 00:09:04.403 "rw_ios_per_sec": 0, 00:09:04.403 "rw_mbytes_per_sec": 0, 00:09:04.403 "r_mbytes_per_sec": 0, 00:09:04.403 "w_mbytes_per_sec": 0 00:09:04.403 }, 00:09:04.403 "claimed": false, 00:09:04.403 "zoned": false, 00:09:04.403 "supported_io_types": { 00:09:04.403 "read": true, 00:09:04.403 "write": true, 00:09:04.403 "unmap": true, 00:09:04.403 "flush": true, 00:09:04.403 "reset": true, 00:09:04.403 "nvme_admin": false, 00:09:04.403 "nvme_io": false, 00:09:04.403 "nvme_io_md": false, 00:09:04.403 "write_zeroes": true, 00:09:04.403 
"zcopy": true, 00:09:04.403 "get_zone_info": false, 00:09:04.403 "zone_management": false, 00:09:04.403 "zone_append": false, 00:09:04.403 "compare": false, 00:09:04.403 "compare_and_write": false, 00:09:04.403 "abort": true, 00:09:04.403 "seek_hole": false, 00:09:04.403 "seek_data": false, 00:09:04.403 "copy": true, 00:09:04.403 "nvme_iov_md": false 00:09:04.403 }, 00:09:04.403 "memory_domains": [ 00:09:04.403 { 00:09:04.403 "dma_device_id": "system", 00:09:04.403 "dma_device_type": 1 00:09:04.403 }, 00:09:04.403 { 00:09:04.403 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:04.403 "dma_device_type": 2 00:09:04.403 } 00:09:04.403 ], 00:09:04.403 "driver_specific": {} 00:09:04.403 } 00:09:04.403 ] 00:09:04.403 09:53:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.403 09:53:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:04.403 09:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:04.403 09:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:04.403 09:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:04.403 09:53:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.403 09:53:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.403 [2024-10-21 09:53:40.976035] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:04.403 [2024-10-21 09:53:40.976165] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:04.403 [2024-10-21 09:53:40.976193] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:04.403 [2024-10-21 09:53:40.978190] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:04.403 09:53:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.403 09:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:04.403 09:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:04.403 09:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:04.403 09:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:04.403 09:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:04.403 09:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:04.403 09:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:04.403 09:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:04.403 09:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:04.403 09:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:04.403 09:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.403 09:53:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:04.403 09:53:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.403 09:53:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.662 09:53:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.662 09:53:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:04.662 "name": "Existed_Raid", 00:09:04.662 "uuid": "dee42986-3597-4cd0-a8c5-d9315392531b", 00:09:04.662 "strip_size_kb": 64, 00:09:04.662 "state": "configuring", 00:09:04.662 "raid_level": "raid0", 00:09:04.662 "superblock": true, 00:09:04.662 "num_base_bdevs": 3, 00:09:04.662 "num_base_bdevs_discovered": 2, 00:09:04.662 "num_base_bdevs_operational": 3, 00:09:04.662 "base_bdevs_list": [ 00:09:04.662 { 00:09:04.662 "name": "BaseBdev1", 00:09:04.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:04.662 "is_configured": false, 00:09:04.662 "data_offset": 0, 00:09:04.662 "data_size": 0 00:09:04.662 }, 00:09:04.662 { 00:09:04.662 "name": "BaseBdev2", 00:09:04.662 "uuid": "a0b87574-f521-40e5-94dd-adb052919cac", 00:09:04.662 "is_configured": true, 00:09:04.662 "data_offset": 2048, 00:09:04.662 "data_size": 63488 00:09:04.662 }, 00:09:04.662 { 00:09:04.662 "name": "BaseBdev3", 00:09:04.662 "uuid": "b06b5fe3-8bf1-429a-8a52-6ca9147ae0f1", 00:09:04.662 "is_configured": true, 00:09:04.662 "data_offset": 2048, 00:09:04.662 "data_size": 63488 00:09:04.662 } 00:09:04.662 ] 00:09:04.662 }' 00:09:04.662 09:53:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:04.662 09:53:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.921 09:53:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:04.921 09:53:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.921 09:53:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.921 [2024-10-21 09:53:41.415356] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:04.921 09:53:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.921 09:53:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:04.921 09:53:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:04.921 09:53:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:04.921 09:53:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:04.921 09:53:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:04.921 09:53:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:04.921 09:53:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:04.921 09:53:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:04.921 09:53:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:04.921 09:53:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:04.921 09:53:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.921 09:53:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.921 09:53:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.921 09:53:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:04.921 09:53:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.921 09:53:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:04.921 "name": "Existed_Raid", 00:09:04.921 "uuid": "dee42986-3597-4cd0-a8c5-d9315392531b", 00:09:04.921 "strip_size_kb": 64, 
00:09:04.921 "state": "configuring", 00:09:04.921 "raid_level": "raid0", 00:09:04.921 "superblock": true, 00:09:04.921 "num_base_bdevs": 3, 00:09:04.921 "num_base_bdevs_discovered": 1, 00:09:04.921 "num_base_bdevs_operational": 3, 00:09:04.921 "base_bdevs_list": [ 00:09:04.921 { 00:09:04.922 "name": "BaseBdev1", 00:09:04.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:04.922 "is_configured": false, 00:09:04.922 "data_offset": 0, 00:09:04.922 "data_size": 0 00:09:04.922 }, 00:09:04.922 { 00:09:04.922 "name": null, 00:09:04.922 "uuid": "a0b87574-f521-40e5-94dd-adb052919cac", 00:09:04.922 "is_configured": false, 00:09:04.922 "data_offset": 0, 00:09:04.922 "data_size": 63488 00:09:04.922 }, 00:09:04.922 { 00:09:04.922 "name": "BaseBdev3", 00:09:04.922 "uuid": "b06b5fe3-8bf1-429a-8a52-6ca9147ae0f1", 00:09:04.922 "is_configured": true, 00:09:04.922 "data_offset": 2048, 00:09:04.922 "data_size": 63488 00:09:04.922 } 00:09:04.922 ] 00:09:04.922 }' 00:09:04.922 09:53:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:04.922 09:53:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.507 09:53:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.507 09:53:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:05.507 09:53:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.507 09:53:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.507 09:53:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.507 09:53:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:05.507 09:53:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:09:05.507 09:53:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.507 09:53:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.507 [2024-10-21 09:53:41.894727] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:05.507 BaseBdev1 00:09:05.507 09:53:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.507 09:53:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:05.507 09:53:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:05.507 09:53:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:05.507 09:53:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:05.507 09:53:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:05.507 09:53:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:05.507 09:53:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:05.507 09:53:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.507 09:53:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.507 09:53:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.507 09:53:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:05.507 09:53:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.507 09:53:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.507 
[ 00:09:05.507 { 00:09:05.507 "name": "BaseBdev1", 00:09:05.507 "aliases": [ 00:09:05.507 "ad2f8dab-b8cf-4a28-9653-e4be932eef86" 00:09:05.507 ], 00:09:05.507 "product_name": "Malloc disk", 00:09:05.507 "block_size": 512, 00:09:05.507 "num_blocks": 65536, 00:09:05.507 "uuid": "ad2f8dab-b8cf-4a28-9653-e4be932eef86", 00:09:05.507 "assigned_rate_limits": { 00:09:05.507 "rw_ios_per_sec": 0, 00:09:05.507 "rw_mbytes_per_sec": 0, 00:09:05.507 "r_mbytes_per_sec": 0, 00:09:05.507 "w_mbytes_per_sec": 0 00:09:05.507 }, 00:09:05.507 "claimed": true, 00:09:05.507 "claim_type": "exclusive_write", 00:09:05.507 "zoned": false, 00:09:05.507 "supported_io_types": { 00:09:05.507 "read": true, 00:09:05.507 "write": true, 00:09:05.507 "unmap": true, 00:09:05.507 "flush": true, 00:09:05.507 "reset": true, 00:09:05.507 "nvme_admin": false, 00:09:05.507 "nvme_io": false, 00:09:05.507 "nvme_io_md": false, 00:09:05.507 "write_zeroes": true, 00:09:05.507 "zcopy": true, 00:09:05.507 "get_zone_info": false, 00:09:05.507 "zone_management": false, 00:09:05.507 "zone_append": false, 00:09:05.507 "compare": false, 00:09:05.507 "compare_and_write": false, 00:09:05.507 "abort": true, 00:09:05.507 "seek_hole": false, 00:09:05.507 "seek_data": false, 00:09:05.508 "copy": true, 00:09:05.508 "nvme_iov_md": false 00:09:05.508 }, 00:09:05.508 "memory_domains": [ 00:09:05.508 { 00:09:05.508 "dma_device_id": "system", 00:09:05.508 "dma_device_type": 1 00:09:05.508 }, 00:09:05.508 { 00:09:05.508 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:05.508 "dma_device_type": 2 00:09:05.508 } 00:09:05.508 ], 00:09:05.508 "driver_specific": {} 00:09:05.508 } 00:09:05.508 ] 00:09:05.508 09:53:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.508 09:53:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:05.508 09:53:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 3 00:09:05.508 09:53:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:05.508 09:53:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:05.508 09:53:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:05.508 09:53:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:05.508 09:53:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:05.508 09:53:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:05.508 09:53:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:05.508 09:53:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:05.508 09:53:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:05.508 09:53:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.508 09:53:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.508 09:53:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.508 09:53:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:05.508 09:53:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.508 09:53:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:05.508 "name": "Existed_Raid", 00:09:05.508 "uuid": "dee42986-3597-4cd0-a8c5-d9315392531b", 00:09:05.508 "strip_size_kb": 64, 00:09:05.508 "state": "configuring", 00:09:05.508 "raid_level": "raid0", 00:09:05.508 "superblock": true, 
00:09:05.508 "num_base_bdevs": 3, 00:09:05.508 "num_base_bdevs_discovered": 2, 00:09:05.508 "num_base_bdevs_operational": 3, 00:09:05.508 "base_bdevs_list": [ 00:09:05.508 { 00:09:05.508 "name": "BaseBdev1", 00:09:05.508 "uuid": "ad2f8dab-b8cf-4a28-9653-e4be932eef86", 00:09:05.508 "is_configured": true, 00:09:05.508 "data_offset": 2048, 00:09:05.508 "data_size": 63488 00:09:05.508 }, 00:09:05.508 { 00:09:05.508 "name": null, 00:09:05.508 "uuid": "a0b87574-f521-40e5-94dd-adb052919cac", 00:09:05.508 "is_configured": false, 00:09:05.508 "data_offset": 0, 00:09:05.508 "data_size": 63488 00:09:05.508 }, 00:09:05.508 { 00:09:05.508 "name": "BaseBdev3", 00:09:05.508 "uuid": "b06b5fe3-8bf1-429a-8a52-6ca9147ae0f1", 00:09:05.508 "is_configured": true, 00:09:05.508 "data_offset": 2048, 00:09:05.508 "data_size": 63488 00:09:05.508 } 00:09:05.508 ] 00:09:05.508 }' 00:09:05.508 09:53:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:05.508 09:53:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.077 09:53:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:06.077 09:53:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.077 09:53:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.077 09:53:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.077 09:53:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.077 09:53:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:06.077 09:53:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:06.077 09:53:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:09:06.077 09:53:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.077 [2024-10-21 09:53:42.457805] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:06.077 09:53:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.077 09:53:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:06.077 09:53:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:06.077 09:53:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:06.077 09:53:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:06.077 09:53:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:06.077 09:53:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:06.077 09:53:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:06.077 09:53:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:06.078 09:53:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:06.078 09:53:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:06.078 09:53:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.078 09:53:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.078 09:53:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.078 09:53:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:09:06.078 09:53:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.078 09:53:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:06.078 "name": "Existed_Raid", 00:09:06.078 "uuid": "dee42986-3597-4cd0-a8c5-d9315392531b", 00:09:06.078 "strip_size_kb": 64, 00:09:06.078 "state": "configuring", 00:09:06.078 "raid_level": "raid0", 00:09:06.078 "superblock": true, 00:09:06.078 "num_base_bdevs": 3, 00:09:06.078 "num_base_bdevs_discovered": 1, 00:09:06.078 "num_base_bdevs_operational": 3, 00:09:06.078 "base_bdevs_list": [ 00:09:06.078 { 00:09:06.078 "name": "BaseBdev1", 00:09:06.078 "uuid": "ad2f8dab-b8cf-4a28-9653-e4be932eef86", 00:09:06.078 "is_configured": true, 00:09:06.078 "data_offset": 2048, 00:09:06.078 "data_size": 63488 00:09:06.078 }, 00:09:06.078 { 00:09:06.078 "name": null, 00:09:06.078 "uuid": "a0b87574-f521-40e5-94dd-adb052919cac", 00:09:06.078 "is_configured": false, 00:09:06.078 "data_offset": 0, 00:09:06.078 "data_size": 63488 00:09:06.078 }, 00:09:06.078 { 00:09:06.078 "name": null, 00:09:06.078 "uuid": "b06b5fe3-8bf1-429a-8a52-6ca9147ae0f1", 00:09:06.078 "is_configured": false, 00:09:06.078 "data_offset": 0, 00:09:06.078 "data_size": 63488 00:09:06.078 } 00:09:06.078 ] 00:09:06.078 }' 00:09:06.078 09:53:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:06.078 09:53:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.337 09:53:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.338 09:53:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:06.338 09:53:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.338 09:53:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:09:06.597 09:53:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.597 09:53:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:06.597 09:53:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:06.597 09:53:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.597 09:53:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.597 [2024-10-21 09:53:42.964982] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:06.597 09:53:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.597 09:53:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:06.597 09:53:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:06.597 09:53:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:06.597 09:53:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:06.597 09:53:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:06.597 09:53:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:06.597 09:53:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:06.597 09:53:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:06.597 09:53:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:06.597 09:53:42 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:06.597 09:53:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:06.597 09:53:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.597 09:53:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.597 09:53:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.597 09:53:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.597 09:53:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:06.597 "name": "Existed_Raid", 00:09:06.597 "uuid": "dee42986-3597-4cd0-a8c5-d9315392531b", 00:09:06.597 "strip_size_kb": 64, 00:09:06.597 "state": "configuring", 00:09:06.597 "raid_level": "raid0", 00:09:06.597 "superblock": true, 00:09:06.597 "num_base_bdevs": 3, 00:09:06.597 "num_base_bdevs_discovered": 2, 00:09:06.597 "num_base_bdevs_operational": 3, 00:09:06.597 "base_bdevs_list": [ 00:09:06.597 { 00:09:06.597 "name": "BaseBdev1", 00:09:06.597 "uuid": "ad2f8dab-b8cf-4a28-9653-e4be932eef86", 00:09:06.597 "is_configured": true, 00:09:06.597 "data_offset": 2048, 00:09:06.597 "data_size": 63488 00:09:06.597 }, 00:09:06.597 { 00:09:06.597 "name": null, 00:09:06.597 "uuid": "a0b87574-f521-40e5-94dd-adb052919cac", 00:09:06.597 "is_configured": false, 00:09:06.597 "data_offset": 0, 00:09:06.597 "data_size": 63488 00:09:06.597 }, 00:09:06.597 { 00:09:06.597 "name": "BaseBdev3", 00:09:06.597 "uuid": "b06b5fe3-8bf1-429a-8a52-6ca9147ae0f1", 00:09:06.597 "is_configured": true, 00:09:06.597 "data_offset": 2048, 00:09:06.597 "data_size": 63488 00:09:06.597 } 00:09:06.597 ] 00:09:06.597 }' 00:09:06.597 09:53:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:06.597 09:53:43 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:09:06.856 09:53:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.856 09:53:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:06.856 09:53:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.856 09:53:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.856 09:53:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.856 09:53:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:06.856 09:53:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:06.856 09:53:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.856 09:53:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.116 [2024-10-21 09:53:43.452219] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:07.116 09:53:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.116 09:53:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:07.116 09:53:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:07.116 09:53:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:07.116 09:53:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:07.116 09:53:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:07.116 09:53:43 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:07.116 09:53:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:07.116 09:53:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:07.116 09:53:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:07.116 09:53:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:07.116 09:53:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.116 09:53:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:07.116 09:53:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.116 09:53:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.116 09:53:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.116 09:53:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:07.116 "name": "Existed_Raid", 00:09:07.116 "uuid": "dee42986-3597-4cd0-a8c5-d9315392531b", 00:09:07.116 "strip_size_kb": 64, 00:09:07.116 "state": "configuring", 00:09:07.116 "raid_level": "raid0", 00:09:07.116 "superblock": true, 00:09:07.116 "num_base_bdevs": 3, 00:09:07.116 "num_base_bdevs_discovered": 1, 00:09:07.116 "num_base_bdevs_operational": 3, 00:09:07.116 "base_bdevs_list": [ 00:09:07.116 { 00:09:07.116 "name": null, 00:09:07.116 "uuid": "ad2f8dab-b8cf-4a28-9653-e4be932eef86", 00:09:07.116 "is_configured": false, 00:09:07.116 "data_offset": 0, 00:09:07.116 "data_size": 63488 00:09:07.116 }, 00:09:07.116 { 00:09:07.116 "name": null, 00:09:07.116 "uuid": "a0b87574-f521-40e5-94dd-adb052919cac", 00:09:07.116 "is_configured": false, 00:09:07.116 "data_offset": 0, 00:09:07.116 
"data_size": 63488 00:09:07.116 }, 00:09:07.116 { 00:09:07.116 "name": "BaseBdev3", 00:09:07.116 "uuid": "b06b5fe3-8bf1-429a-8a52-6ca9147ae0f1", 00:09:07.116 "is_configured": true, 00:09:07.116 "data_offset": 2048, 00:09:07.116 "data_size": 63488 00:09:07.116 } 00:09:07.116 ] 00:09:07.116 }' 00:09:07.116 09:53:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:07.116 09:53:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.684 09:53:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.684 09:53:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.684 09:53:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.684 09:53:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:07.684 09:53:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.684 09:53:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:07.684 09:53:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:07.684 09:53:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.684 09:53:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.684 [2024-10-21 09:53:44.107938] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:07.684 09:53:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.684 09:53:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:07.684 09:53:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:07.684 09:53:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:07.684 09:53:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:07.684 09:53:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:07.684 09:53:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:07.684 09:53:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:07.684 09:53:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:07.684 09:53:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:07.684 09:53:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:07.684 09:53:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:07.684 09:53:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.684 09:53:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.684 09:53:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.684 09:53:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.684 09:53:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:07.684 "name": "Existed_Raid", 00:09:07.684 "uuid": "dee42986-3597-4cd0-a8c5-d9315392531b", 00:09:07.684 "strip_size_kb": 64, 00:09:07.684 "state": "configuring", 00:09:07.684 "raid_level": "raid0", 00:09:07.684 "superblock": true, 00:09:07.684 "num_base_bdevs": 3, 00:09:07.684 
"num_base_bdevs_discovered": 2, 00:09:07.684 "num_base_bdevs_operational": 3, 00:09:07.684 "base_bdevs_list": [ 00:09:07.684 { 00:09:07.684 "name": null, 00:09:07.684 "uuid": "ad2f8dab-b8cf-4a28-9653-e4be932eef86", 00:09:07.684 "is_configured": false, 00:09:07.684 "data_offset": 0, 00:09:07.684 "data_size": 63488 00:09:07.684 }, 00:09:07.684 { 00:09:07.684 "name": "BaseBdev2", 00:09:07.684 "uuid": "a0b87574-f521-40e5-94dd-adb052919cac", 00:09:07.684 "is_configured": true, 00:09:07.684 "data_offset": 2048, 00:09:07.684 "data_size": 63488 00:09:07.684 }, 00:09:07.684 { 00:09:07.684 "name": "BaseBdev3", 00:09:07.684 "uuid": "b06b5fe3-8bf1-429a-8a52-6ca9147ae0f1", 00:09:07.684 "is_configured": true, 00:09:07.684 "data_offset": 2048, 00:09:07.684 "data_size": 63488 00:09:07.684 } 00:09:07.684 ] 00:09:07.684 }' 00:09:07.684 09:53:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:07.684 09:53:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.252 09:53:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.252 09:53:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.252 09:53:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.252 09:53:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:08.252 09:53:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.252 09:53:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:08.252 09:53:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.252 09:53:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:08.252 09:53:44 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.252 09:53:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.252 09:53:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.252 09:53:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u ad2f8dab-b8cf-4a28-9653-e4be932eef86 00:09:08.252 09:53:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.252 09:53:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.252 [2024-10-21 09:53:44.693992] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:08.252 [2024-10-21 09:53:44.694341] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:09:08.252 [2024-10-21 09:53:44.694395] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:08.252 [2024-10-21 09:53:44.694738] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:09:08.252 [2024-10-21 09:53:44.694939] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:09:08.252 [2024-10-21 09:53:44.694981] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006600 00:09:08.252 NewBaseBdev 00:09:08.252 [2024-10-21 09:53:44.695174] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:08.252 09:53:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.252 09:53:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:08.252 09:53:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:09:08.252 
09:53:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:08.252 09:53:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:08.252 09:53:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:08.252 09:53:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:08.252 09:53:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:08.252 09:53:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.252 09:53:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.252 09:53:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.252 09:53:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:08.252 09:53:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.252 09:53:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.252 [ 00:09:08.252 { 00:09:08.252 "name": "NewBaseBdev", 00:09:08.252 "aliases": [ 00:09:08.252 "ad2f8dab-b8cf-4a28-9653-e4be932eef86" 00:09:08.252 ], 00:09:08.252 "product_name": "Malloc disk", 00:09:08.252 "block_size": 512, 00:09:08.252 "num_blocks": 65536, 00:09:08.252 "uuid": "ad2f8dab-b8cf-4a28-9653-e4be932eef86", 00:09:08.252 "assigned_rate_limits": { 00:09:08.252 "rw_ios_per_sec": 0, 00:09:08.252 "rw_mbytes_per_sec": 0, 00:09:08.252 "r_mbytes_per_sec": 0, 00:09:08.252 "w_mbytes_per_sec": 0 00:09:08.252 }, 00:09:08.252 "claimed": true, 00:09:08.252 "claim_type": "exclusive_write", 00:09:08.252 "zoned": false, 00:09:08.252 "supported_io_types": { 00:09:08.252 "read": true, 00:09:08.252 "write": true, 00:09:08.252 
"unmap": true, 00:09:08.252 "flush": true, 00:09:08.252 "reset": true, 00:09:08.252 "nvme_admin": false, 00:09:08.252 "nvme_io": false, 00:09:08.252 "nvme_io_md": false, 00:09:08.252 "write_zeroes": true, 00:09:08.252 "zcopy": true, 00:09:08.252 "get_zone_info": false, 00:09:08.252 "zone_management": false, 00:09:08.252 "zone_append": false, 00:09:08.252 "compare": false, 00:09:08.252 "compare_and_write": false, 00:09:08.252 "abort": true, 00:09:08.252 "seek_hole": false, 00:09:08.252 "seek_data": false, 00:09:08.252 "copy": true, 00:09:08.252 "nvme_iov_md": false 00:09:08.252 }, 00:09:08.252 "memory_domains": [ 00:09:08.252 { 00:09:08.252 "dma_device_id": "system", 00:09:08.252 "dma_device_type": 1 00:09:08.252 }, 00:09:08.252 { 00:09:08.252 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:08.252 "dma_device_type": 2 00:09:08.252 } 00:09:08.252 ], 00:09:08.252 "driver_specific": {} 00:09:08.252 } 00:09:08.252 ] 00:09:08.252 09:53:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.252 09:53:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:08.252 09:53:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:09:08.252 09:53:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:08.252 09:53:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:08.252 09:53:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:08.252 09:53:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:08.252 09:53:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:08.252 09:53:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:09:08.252 09:53:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:08.252 09:53:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:08.253 09:53:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:08.253 09:53:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.253 09:53:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:08.253 09:53:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.253 09:53:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.253 09:53:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.253 09:53:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:08.253 "name": "Existed_Raid", 00:09:08.253 "uuid": "dee42986-3597-4cd0-a8c5-d9315392531b", 00:09:08.253 "strip_size_kb": 64, 00:09:08.253 "state": "online", 00:09:08.253 "raid_level": "raid0", 00:09:08.253 "superblock": true, 00:09:08.253 "num_base_bdevs": 3, 00:09:08.253 "num_base_bdevs_discovered": 3, 00:09:08.253 "num_base_bdevs_operational": 3, 00:09:08.253 "base_bdevs_list": [ 00:09:08.253 { 00:09:08.253 "name": "NewBaseBdev", 00:09:08.253 "uuid": "ad2f8dab-b8cf-4a28-9653-e4be932eef86", 00:09:08.253 "is_configured": true, 00:09:08.253 "data_offset": 2048, 00:09:08.253 "data_size": 63488 00:09:08.253 }, 00:09:08.253 { 00:09:08.253 "name": "BaseBdev2", 00:09:08.253 "uuid": "a0b87574-f521-40e5-94dd-adb052919cac", 00:09:08.253 "is_configured": true, 00:09:08.253 "data_offset": 2048, 00:09:08.253 "data_size": 63488 00:09:08.253 }, 00:09:08.253 { 00:09:08.253 "name": "BaseBdev3", 00:09:08.253 "uuid": "b06b5fe3-8bf1-429a-8a52-6ca9147ae0f1", 00:09:08.253 
"is_configured": true, 00:09:08.253 "data_offset": 2048, 00:09:08.253 "data_size": 63488 00:09:08.253 } 00:09:08.253 ] 00:09:08.253 }' 00:09:08.253 09:53:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:08.253 09:53:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.822 09:53:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:08.822 09:53:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:08.822 09:53:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:08.822 09:53:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:08.822 09:53:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:08.822 09:53:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:08.822 09:53:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:08.822 09:53:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:08.822 09:53:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.822 09:53:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.822 [2024-10-21 09:53:45.221464] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:08.822 09:53:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.822 09:53:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:08.822 "name": "Existed_Raid", 00:09:08.822 "aliases": [ 00:09:08.822 "dee42986-3597-4cd0-a8c5-d9315392531b" 00:09:08.822 ], 00:09:08.822 "product_name": "Raid 
Volume", 00:09:08.822 "block_size": 512, 00:09:08.822 "num_blocks": 190464, 00:09:08.822 "uuid": "dee42986-3597-4cd0-a8c5-d9315392531b", 00:09:08.822 "assigned_rate_limits": { 00:09:08.822 "rw_ios_per_sec": 0, 00:09:08.822 "rw_mbytes_per_sec": 0, 00:09:08.822 "r_mbytes_per_sec": 0, 00:09:08.822 "w_mbytes_per_sec": 0 00:09:08.822 }, 00:09:08.822 "claimed": false, 00:09:08.822 "zoned": false, 00:09:08.822 "supported_io_types": { 00:09:08.822 "read": true, 00:09:08.822 "write": true, 00:09:08.822 "unmap": true, 00:09:08.822 "flush": true, 00:09:08.822 "reset": true, 00:09:08.822 "nvme_admin": false, 00:09:08.822 "nvme_io": false, 00:09:08.822 "nvme_io_md": false, 00:09:08.822 "write_zeroes": true, 00:09:08.822 "zcopy": false, 00:09:08.822 "get_zone_info": false, 00:09:08.822 "zone_management": false, 00:09:08.822 "zone_append": false, 00:09:08.822 "compare": false, 00:09:08.822 "compare_and_write": false, 00:09:08.822 "abort": false, 00:09:08.822 "seek_hole": false, 00:09:08.822 "seek_data": false, 00:09:08.822 "copy": false, 00:09:08.822 "nvme_iov_md": false 00:09:08.822 }, 00:09:08.822 "memory_domains": [ 00:09:08.822 { 00:09:08.822 "dma_device_id": "system", 00:09:08.822 "dma_device_type": 1 00:09:08.822 }, 00:09:08.822 { 00:09:08.822 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:08.822 "dma_device_type": 2 00:09:08.822 }, 00:09:08.822 { 00:09:08.822 "dma_device_id": "system", 00:09:08.822 "dma_device_type": 1 00:09:08.822 }, 00:09:08.822 { 00:09:08.822 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:08.822 "dma_device_type": 2 00:09:08.822 }, 00:09:08.822 { 00:09:08.822 "dma_device_id": "system", 00:09:08.822 "dma_device_type": 1 00:09:08.822 }, 00:09:08.822 { 00:09:08.822 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:08.822 "dma_device_type": 2 00:09:08.822 } 00:09:08.822 ], 00:09:08.822 "driver_specific": { 00:09:08.822 "raid": { 00:09:08.822 "uuid": "dee42986-3597-4cd0-a8c5-d9315392531b", 00:09:08.822 "strip_size_kb": 64, 00:09:08.822 "state": "online", 
00:09:08.822 "raid_level": "raid0", 00:09:08.822 "superblock": true, 00:09:08.822 "num_base_bdevs": 3, 00:09:08.822 "num_base_bdevs_discovered": 3, 00:09:08.822 "num_base_bdevs_operational": 3, 00:09:08.822 "base_bdevs_list": [ 00:09:08.822 { 00:09:08.822 "name": "NewBaseBdev", 00:09:08.822 "uuid": "ad2f8dab-b8cf-4a28-9653-e4be932eef86", 00:09:08.822 "is_configured": true, 00:09:08.822 "data_offset": 2048, 00:09:08.822 "data_size": 63488 00:09:08.822 }, 00:09:08.822 { 00:09:08.822 "name": "BaseBdev2", 00:09:08.822 "uuid": "a0b87574-f521-40e5-94dd-adb052919cac", 00:09:08.822 "is_configured": true, 00:09:08.822 "data_offset": 2048, 00:09:08.822 "data_size": 63488 00:09:08.822 }, 00:09:08.822 { 00:09:08.822 "name": "BaseBdev3", 00:09:08.822 "uuid": "b06b5fe3-8bf1-429a-8a52-6ca9147ae0f1", 00:09:08.822 "is_configured": true, 00:09:08.822 "data_offset": 2048, 00:09:08.822 "data_size": 63488 00:09:08.822 } 00:09:08.822 ] 00:09:08.822 } 00:09:08.822 } 00:09:08.822 }' 00:09:08.822 09:53:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:08.822 09:53:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:08.822 BaseBdev2 00:09:08.822 BaseBdev3' 00:09:08.822 09:53:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:08.822 09:53:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:08.822 09:53:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:08.822 09:53:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:08.822 09:53:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.822 09:53:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:08.822 09:53:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.822 09:53:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.081 09:53:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:09.081 09:53:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:09.081 09:53:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:09.081 09:53:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:09.081 09:53:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:09.081 09:53:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.081 09:53:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.081 09:53:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.081 09:53:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:09.082 09:53:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:09.082 09:53:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:09.082 09:53:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:09.082 09:53:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:09.082 09:53:45 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.082 09:53:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.082 09:53:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.082 09:53:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:09.082 09:53:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:09.082 09:53:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:09.082 09:53:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.082 09:53:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.082 [2024-10-21 09:53:45.524590] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:09.082 [2024-10-21 09:53:45.524720] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:09.082 [2024-10-21 09:53:45.524837] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:09.082 [2024-10-21 09:53:45.524917] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:09.082 [2024-10-21 09:53:45.525007] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state offline 00:09:09.082 09:53:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.082 09:53:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64023 00:09:09.082 09:53:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 64023 ']' 00:09:09.082 09:53:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 
64023 00:09:09.082 09:53:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:09:09.082 09:53:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:09.082 09:53:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64023 00:09:09.082 killing process with pid 64023 00:09:09.082 09:53:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:09.082 09:53:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:09.082 09:53:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64023' 00:09:09.082 09:53:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 64023 00:09:09.082 [2024-10-21 09:53:45.565294] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:09.082 09:53:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 64023 00:09:09.340 [2024-10-21 09:53:45.895457] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:10.719 09:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:10.719 00:09:10.719 real 0m10.945s 00:09:10.719 user 0m17.214s 00:09:10.719 sys 0m1.939s 00:09:10.719 ************************************ 00:09:10.719 END TEST raid_state_function_test_sb 00:09:10.719 ************************************ 00:09:10.719 09:53:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:10.719 09:53:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.719 09:53:47 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:09:10.719 09:53:47 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:10.719 
09:53:47 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:10.719 09:53:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:10.719 ************************************ 00:09:10.719 START TEST raid_superblock_test 00:09:10.719 ************************************ 00:09:10.719 09:53:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 3 00:09:10.719 09:53:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:09:10.719 09:53:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:10.719 09:53:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:10.719 09:53:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:10.719 09:53:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:10.719 09:53:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:10.719 09:53:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:10.719 09:53:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:10.719 09:53:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:10.719 09:53:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:10.719 09:53:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:10.719 09:53:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:10.719 09:53:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:10.719 09:53:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:09:10.719 09:53:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 
00:09:10.719 09:53:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:10.719 09:53:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=64644 00:09:10.719 09:53:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:10.719 09:53:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 64644 00:09:10.719 09:53:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 64644 ']' 00:09:10.719 09:53:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:10.719 09:53:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:10.720 09:53:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:10.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:10.720 09:53:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:10.720 09:53:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.720 [2024-10-21 09:53:47.297968] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
00:09:10.720 [2024-10-21 09:53:47.298173] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64644 ] 00:09:10.979 [2024-10-21 09:53:47.458840] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.238 [2024-10-21 09:53:47.608617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.498 [2024-10-21 09:53:47.854721] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:11.498 [2024-10-21 09:53:47.854770] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:11.758 09:53:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:11.758 09:53:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:09:11.758 09:53:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:11.758 09:53:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:11.758 09:53:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:11.758 09:53:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:11.758 09:53:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:11.758 09:53:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:11.758 09:53:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:11.758 09:53:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:11.758 09:53:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:11.758 
09:53:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.758 09:53:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.758 malloc1 00:09:11.759 09:53:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.759 09:53:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:11.759 09:53:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.759 09:53:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.759 [2024-10-21 09:53:48.175039] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:11.759 [2024-10-21 09:53:48.175204] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:11.759 [2024-10-21 09:53:48.175248] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:09:11.759 [2024-10-21 09:53:48.175279] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:11.759 [2024-10-21 09:53:48.177679] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:11.759 [2024-10-21 09:53:48.177771] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:11.759 pt1 00:09:11.759 09:53:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.759 09:53:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:11.759 09:53:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:11.759 09:53:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:11.759 09:53:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:11.759 09:53:48 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:11.759 09:53:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:11.759 09:53:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:11.759 09:53:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:11.759 09:53:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:11.759 09:53:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.759 09:53:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.759 malloc2 00:09:11.759 09:53:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.759 09:53:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:11.759 09:53:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.759 09:53:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.759 [2024-10-21 09:53:48.241222] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:11.759 [2024-10-21 09:53:48.241346] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:11.759 [2024-10-21 09:53:48.241386] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:09:11.759 [2024-10-21 09:53:48.241419] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:11.759 [2024-10-21 09:53:48.243784] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:11.759 [2024-10-21 09:53:48.243854] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:11.759 
pt2 00:09:11.759 09:53:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.759 09:53:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:11.759 09:53:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:11.759 09:53:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:11.759 09:53:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:11.759 09:53:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:11.759 09:53:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:11.759 09:53:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:11.759 09:53:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:11.759 09:53:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:11.759 09:53:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.759 09:53:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.759 malloc3 00:09:11.759 09:53:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.759 09:53:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:11.759 09:53:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.759 09:53:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.759 [2024-10-21 09:53:48.310460] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:11.759 [2024-10-21 09:53:48.310600] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:11.759 [2024-10-21 09:53:48.310641] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:09:11.759 [2024-10-21 09:53:48.310673] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:11.759 [2024-10-21 09:53:48.313016] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:11.759 [2024-10-21 09:53:48.313087] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:11.759 pt3 00:09:11.759 09:53:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.759 09:53:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:11.759 09:53:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:11.759 09:53:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:11.759 09:53:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.759 09:53:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.759 [2024-10-21 09:53:48.322500] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:11.759 [2024-10-21 09:53:48.324533] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:11.759 [2024-10-21 09:53:48.324631] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:11.759 [2024-10-21 09:53:48.324783] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000005b80 00:09:11.759 [2024-10-21 09:53:48.324798] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:11.759 [2024-10-21 09:53:48.325051] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 
00:09:11.759 [2024-10-21 09:53:48.325232] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000005b80 00:09:11.759 [2024-10-21 09:53:48.325242] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000005b80 00:09:11.759 [2024-10-21 09:53:48.325379] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:11.759 09:53:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.759 09:53:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:11.759 09:53:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:11.759 09:53:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:11.759 09:53:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:11.759 09:53:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:11.759 09:53:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:11.759 09:53:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.759 09:53:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.759 09:53:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.759 09:53:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.759 09:53:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.759 09:53:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:11.759 09:53:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.759 09:53:48 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.759 09:53:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.019 09:53:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:12.019 "name": "raid_bdev1", 00:09:12.019 "uuid": "bdca4d6b-d37b-4cb2-9177-f140a9962e9d", 00:09:12.019 "strip_size_kb": 64, 00:09:12.019 "state": "online", 00:09:12.019 "raid_level": "raid0", 00:09:12.019 "superblock": true, 00:09:12.019 "num_base_bdevs": 3, 00:09:12.019 "num_base_bdevs_discovered": 3, 00:09:12.019 "num_base_bdevs_operational": 3, 00:09:12.019 "base_bdevs_list": [ 00:09:12.019 { 00:09:12.019 "name": "pt1", 00:09:12.019 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:12.019 "is_configured": true, 00:09:12.019 "data_offset": 2048, 00:09:12.019 "data_size": 63488 00:09:12.019 }, 00:09:12.019 { 00:09:12.019 "name": "pt2", 00:09:12.019 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:12.019 "is_configured": true, 00:09:12.019 "data_offset": 2048, 00:09:12.019 "data_size": 63488 00:09:12.019 }, 00:09:12.019 { 00:09:12.019 "name": "pt3", 00:09:12.019 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:12.019 "is_configured": true, 00:09:12.019 "data_offset": 2048, 00:09:12.019 "data_size": 63488 00:09:12.019 } 00:09:12.019 ] 00:09:12.019 }' 00:09:12.019 09:53:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:12.019 09:53:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.279 09:53:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:12.279 09:53:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:12.279 09:53:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:12.279 09:53:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:09:12.279 09:53:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:12.279 09:53:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:12.279 09:53:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:12.279 09:53:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.279 09:53:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.279 09:53:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:12.279 [2024-10-21 09:53:48.694185] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:12.279 09:53:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.279 09:53:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:12.279 "name": "raid_bdev1", 00:09:12.279 "aliases": [ 00:09:12.279 "bdca4d6b-d37b-4cb2-9177-f140a9962e9d" 00:09:12.279 ], 00:09:12.279 "product_name": "Raid Volume", 00:09:12.279 "block_size": 512, 00:09:12.279 "num_blocks": 190464, 00:09:12.279 "uuid": "bdca4d6b-d37b-4cb2-9177-f140a9962e9d", 00:09:12.279 "assigned_rate_limits": { 00:09:12.279 "rw_ios_per_sec": 0, 00:09:12.279 "rw_mbytes_per_sec": 0, 00:09:12.279 "r_mbytes_per_sec": 0, 00:09:12.279 "w_mbytes_per_sec": 0 00:09:12.279 }, 00:09:12.279 "claimed": false, 00:09:12.279 "zoned": false, 00:09:12.279 "supported_io_types": { 00:09:12.279 "read": true, 00:09:12.279 "write": true, 00:09:12.279 "unmap": true, 00:09:12.279 "flush": true, 00:09:12.279 "reset": true, 00:09:12.279 "nvme_admin": false, 00:09:12.279 "nvme_io": false, 00:09:12.279 "nvme_io_md": false, 00:09:12.279 "write_zeroes": true, 00:09:12.279 "zcopy": false, 00:09:12.279 "get_zone_info": false, 00:09:12.279 "zone_management": false, 00:09:12.279 "zone_append": false, 00:09:12.279 "compare": 
false, 00:09:12.279 "compare_and_write": false, 00:09:12.279 "abort": false, 00:09:12.279 "seek_hole": false, 00:09:12.279 "seek_data": false, 00:09:12.279 "copy": false, 00:09:12.279 "nvme_iov_md": false 00:09:12.279 }, 00:09:12.279 "memory_domains": [ 00:09:12.279 { 00:09:12.279 "dma_device_id": "system", 00:09:12.279 "dma_device_type": 1 00:09:12.279 }, 00:09:12.279 { 00:09:12.279 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:12.279 "dma_device_type": 2 00:09:12.279 }, 00:09:12.279 { 00:09:12.279 "dma_device_id": "system", 00:09:12.279 "dma_device_type": 1 00:09:12.279 }, 00:09:12.279 { 00:09:12.279 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:12.279 "dma_device_type": 2 00:09:12.279 }, 00:09:12.279 { 00:09:12.279 "dma_device_id": "system", 00:09:12.279 "dma_device_type": 1 00:09:12.279 }, 00:09:12.279 { 00:09:12.279 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:12.279 "dma_device_type": 2 00:09:12.279 } 00:09:12.279 ], 00:09:12.279 "driver_specific": { 00:09:12.279 "raid": { 00:09:12.279 "uuid": "bdca4d6b-d37b-4cb2-9177-f140a9962e9d", 00:09:12.279 "strip_size_kb": 64, 00:09:12.279 "state": "online", 00:09:12.279 "raid_level": "raid0", 00:09:12.279 "superblock": true, 00:09:12.279 "num_base_bdevs": 3, 00:09:12.279 "num_base_bdevs_discovered": 3, 00:09:12.279 "num_base_bdevs_operational": 3, 00:09:12.279 "base_bdevs_list": [ 00:09:12.279 { 00:09:12.279 "name": "pt1", 00:09:12.279 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:12.279 "is_configured": true, 00:09:12.279 "data_offset": 2048, 00:09:12.279 "data_size": 63488 00:09:12.279 }, 00:09:12.279 { 00:09:12.279 "name": "pt2", 00:09:12.279 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:12.279 "is_configured": true, 00:09:12.279 "data_offset": 2048, 00:09:12.279 "data_size": 63488 00:09:12.279 }, 00:09:12.279 { 00:09:12.279 "name": "pt3", 00:09:12.279 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:12.279 "is_configured": true, 00:09:12.279 "data_offset": 2048, 00:09:12.279 "data_size": 
63488 00:09:12.279 } 00:09:12.279 ] 00:09:12.279 } 00:09:12.279 } 00:09:12.279 }' 00:09:12.279 09:53:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:12.279 09:53:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:12.279 pt2 00:09:12.279 pt3' 00:09:12.279 09:53:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:12.279 09:53:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:12.279 09:53:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:12.280 09:53:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:12.280 09:53:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:12.280 09:53:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.280 09:53:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.280 09:53:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.280 09:53:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:12.280 09:53:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:12.280 09:53:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:12.280 09:53:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:12.280 09:53:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:12.539 09:53:48 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.539 09:53:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.540 09:53:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.540 09:53:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:12.540 09:53:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:12.540 09:53:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:12.540 09:53:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:12.540 09:53:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.540 09:53:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.540 09:53:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:12.540 09:53:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.540 09:53:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:12.540 09:53:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:12.540 09:53:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:12.540 09:53:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:12.540 09:53:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.540 09:53:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.540 [2024-10-21 09:53:48.973623] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:12.540 09:53:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:09:12.540 09:53:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=bdca4d6b-d37b-4cb2-9177-f140a9962e9d 00:09:12.540 09:53:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z bdca4d6b-d37b-4cb2-9177-f140a9962e9d ']' 00:09:12.540 09:53:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:12.540 09:53:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.540 09:53:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.540 [2024-10-21 09:53:49.017256] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:12.540 [2024-10-21 09:53:49.017292] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:12.540 [2024-10-21 09:53:49.017377] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:12.540 [2024-10-21 09:53:49.017450] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:12.540 [2024-10-21 09:53:49.017460] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000005b80 name raid_bdev1, state offline 00:09:12.540 09:53:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.540 09:53:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:12.540 09:53:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.540 09:53:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.540 09:53:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.540 09:53:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.540 09:53:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:09:12.540 09:53:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:12.540 09:53:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:12.540 09:53:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:12.540 09:53:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.540 09:53:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.540 09:53:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.540 09:53:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:12.540 09:53:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:12.540 09:53:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.540 09:53:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.540 09:53:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.540 09:53:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:12.540 09:53:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:12.540 09:53:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.540 09:53:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.540 09:53:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.540 09:53:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:12.540 09:53:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:12.540 09:53:49 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.540 09:53:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.800 09:53:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.800 09:53:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:12.800 09:53:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:12.800 09:53:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:09:12.800 09:53:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:12.800 09:53:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:09:12.800 09:53:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:12.800 09:53:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:09:12.800 09:53:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:12.800 09:53:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:12.800 09:53:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.800 09:53:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.800 [2024-10-21 09:53:49.165028] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:12.800 [2024-10-21 09:53:49.167195] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:12.800 [2024-10-21 09:53:49.167243] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:12.800 [2024-10-21 09:53:49.167294] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:12.800 [2024-10-21 09:53:49.167348] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:12.800 [2024-10-21 09:53:49.167367] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:12.800 [2024-10-21 09:53:49.167384] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:12.800 [2024-10-21 09:53:49.167395] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000005f00 name raid_bdev1, state configuring 00:09:12.800 request: 00:09:12.800 { 00:09:12.800 "name": "raid_bdev1", 00:09:12.800 "raid_level": "raid0", 00:09:12.800 "base_bdevs": [ 00:09:12.800 "malloc1", 00:09:12.800 "malloc2", 00:09:12.800 "malloc3" 00:09:12.800 ], 00:09:12.800 "strip_size_kb": 64, 00:09:12.800 "superblock": false, 00:09:12.800 "method": "bdev_raid_create", 00:09:12.800 "req_id": 1 00:09:12.800 } 00:09:12.800 Got JSON-RPC error response 00:09:12.800 response: 00:09:12.800 { 00:09:12.800 "code": -17, 00:09:12.800 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:12.800 } 00:09:12.800 09:53:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:12.800 09:53:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:09:12.800 09:53:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:12.800 09:53:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:12.800 09:53:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:12.800 09:53:49 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.800 09:53:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.800 09:53:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:12.800 09:53:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.800 09:53:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.800 09:53:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:12.800 09:53:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:12.800 09:53:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:12.800 09:53:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.800 09:53:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.800 [2024-10-21 09:53:49.232878] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:12.800 [2024-10-21 09:53:49.233007] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:12.800 [2024-10-21 09:53:49.233046] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:12.800 [2024-10-21 09:53:49.233077] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:12.800 [2024-10-21 09:53:49.235557] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:12.800 [2024-10-21 09:53:49.235647] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:12.800 [2024-10-21 09:53:49.235750] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:12.800 [2024-10-21 09:53:49.235832] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:09:12.800 pt1 00:09:12.800 09:53:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.800 09:53:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:09:12.800 09:53:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:12.800 09:53:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:12.800 09:53:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:12.800 09:53:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:12.800 09:53:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:12.800 09:53:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:12.800 09:53:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:12.800 09:53:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:12.800 09:53:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:12.800 09:53:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.800 09:53:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.800 09:53:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.800 09:53:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:12.800 09:53:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.800 09:53:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:12.800 "name": "raid_bdev1", 00:09:12.800 "uuid": "bdca4d6b-d37b-4cb2-9177-f140a9962e9d", 00:09:12.800 
"strip_size_kb": 64, 00:09:12.800 "state": "configuring", 00:09:12.800 "raid_level": "raid0", 00:09:12.800 "superblock": true, 00:09:12.800 "num_base_bdevs": 3, 00:09:12.800 "num_base_bdevs_discovered": 1, 00:09:12.800 "num_base_bdevs_operational": 3, 00:09:12.800 "base_bdevs_list": [ 00:09:12.800 { 00:09:12.800 "name": "pt1", 00:09:12.800 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:12.800 "is_configured": true, 00:09:12.800 "data_offset": 2048, 00:09:12.800 "data_size": 63488 00:09:12.800 }, 00:09:12.800 { 00:09:12.800 "name": null, 00:09:12.800 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:12.800 "is_configured": false, 00:09:12.800 "data_offset": 2048, 00:09:12.800 "data_size": 63488 00:09:12.800 }, 00:09:12.800 { 00:09:12.800 "name": null, 00:09:12.800 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:12.800 "is_configured": false, 00:09:12.800 "data_offset": 2048, 00:09:12.800 "data_size": 63488 00:09:12.800 } 00:09:12.800 ] 00:09:12.800 }' 00:09:12.800 09:53:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:12.800 09:53:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.370 09:53:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:13.370 09:53:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:13.370 09:53:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.370 09:53:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.370 [2024-10-21 09:53:49.716157] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:13.370 [2024-10-21 09:53:49.716253] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:13.370 [2024-10-21 09:53:49.716279] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009980 00:09:13.370 [2024-10-21 09:53:49.716290] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:13.370 [2024-10-21 09:53:49.716836] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:13.370 [2024-10-21 09:53:49.716857] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:13.370 [2024-10-21 09:53:49.716962] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:13.370 [2024-10-21 09:53:49.716985] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:13.370 pt2 00:09:13.370 09:53:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.370 09:53:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:13.370 09:53:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.370 09:53:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.370 [2024-10-21 09:53:49.724118] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:13.370 09:53:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.370 09:53:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:09:13.370 09:53:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:13.370 09:53:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:13.370 09:53:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:13.370 09:53:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:13.370 09:53:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:13.370 09:53:49 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:13.370 09:53:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:13.370 09:53:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:13.370 09:53:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:13.370 09:53:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:13.370 09:53:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.370 09:53:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.370 09:53:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.370 09:53:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.370 09:53:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:13.370 "name": "raid_bdev1", 00:09:13.370 "uuid": "bdca4d6b-d37b-4cb2-9177-f140a9962e9d", 00:09:13.370 "strip_size_kb": 64, 00:09:13.370 "state": "configuring", 00:09:13.370 "raid_level": "raid0", 00:09:13.370 "superblock": true, 00:09:13.370 "num_base_bdevs": 3, 00:09:13.370 "num_base_bdevs_discovered": 1, 00:09:13.370 "num_base_bdevs_operational": 3, 00:09:13.370 "base_bdevs_list": [ 00:09:13.370 { 00:09:13.370 "name": "pt1", 00:09:13.370 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:13.370 "is_configured": true, 00:09:13.370 "data_offset": 2048, 00:09:13.370 "data_size": 63488 00:09:13.370 }, 00:09:13.370 { 00:09:13.370 "name": null, 00:09:13.370 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:13.370 "is_configured": false, 00:09:13.370 "data_offset": 0, 00:09:13.370 "data_size": 63488 00:09:13.370 }, 00:09:13.370 { 00:09:13.370 "name": null, 00:09:13.370 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:13.370 
"is_configured": false, 00:09:13.370 "data_offset": 2048, 00:09:13.370 "data_size": 63488 00:09:13.370 } 00:09:13.370 ] 00:09:13.370 }' 00:09:13.370 09:53:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:13.370 09:53:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.631 09:53:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:13.631 09:53:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:13.631 09:53:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:13.632 09:53:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.632 09:53:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.632 [2024-10-21 09:53:50.151376] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:13.632 [2024-10-21 09:53:50.151586] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:13.632 [2024-10-21 09:53:50.151630] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:09:13.632 [2024-10-21 09:53:50.151670] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:13.632 [2024-10-21 09:53:50.152248] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:13.632 [2024-10-21 09:53:50.152313] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:13.632 [2024-10-21 09:53:50.152450] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:13.632 [2024-10-21 09:53:50.152507] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:13.632 pt2 00:09:13.632 09:53:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:13.632 09:53:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:13.632 09:53:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:13.632 09:53:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:13.632 09:53:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.632 09:53:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.632 [2024-10-21 09:53:50.163331] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:13.632 [2024-10-21 09:53:50.163428] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:13.632 [2024-10-21 09:53:50.163459] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:13.632 [2024-10-21 09:53:50.163489] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:13.632 [2024-10-21 09:53:50.163917] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:13.632 [2024-10-21 09:53:50.163978] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:13.632 [2024-10-21 09:53:50.164067] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:13.632 [2024-10-21 09:53:50.164128] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:13.632 [2024-10-21 09:53:50.164279] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:09:13.632 [2024-10-21 09:53:50.164321] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:13.632 [2024-10-21 09:53:50.164609] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:09:13.632 [2024-10-21 09:53:50.164750] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:09:13.632 [2024-10-21 09:53:50.164759] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:09:13.632 [2024-10-21 09:53:50.164901] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:13.632 pt3 00:09:13.632 09:53:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.632 09:53:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:13.632 09:53:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:13.632 09:53:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:13.632 09:53:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:13.632 09:53:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:13.632 09:53:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:13.632 09:53:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:13.632 09:53:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:13.632 09:53:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:13.632 09:53:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:13.632 09:53:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:13.632 09:53:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:13.632 09:53:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.632 09:53:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:09:13.632 09:53:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.632 09:53:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.632 09:53:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.632 09:53:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:13.632 "name": "raid_bdev1", 00:09:13.632 "uuid": "bdca4d6b-d37b-4cb2-9177-f140a9962e9d", 00:09:13.632 "strip_size_kb": 64, 00:09:13.632 "state": "online", 00:09:13.632 "raid_level": "raid0", 00:09:13.632 "superblock": true, 00:09:13.632 "num_base_bdevs": 3, 00:09:13.632 "num_base_bdevs_discovered": 3, 00:09:13.632 "num_base_bdevs_operational": 3, 00:09:13.632 "base_bdevs_list": [ 00:09:13.632 { 00:09:13.632 "name": "pt1", 00:09:13.632 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:13.632 "is_configured": true, 00:09:13.632 "data_offset": 2048, 00:09:13.632 "data_size": 63488 00:09:13.632 }, 00:09:13.632 { 00:09:13.632 "name": "pt2", 00:09:13.632 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:13.632 "is_configured": true, 00:09:13.632 "data_offset": 2048, 00:09:13.632 "data_size": 63488 00:09:13.632 }, 00:09:13.632 { 00:09:13.632 "name": "pt3", 00:09:13.632 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:13.632 "is_configured": true, 00:09:13.632 "data_offset": 2048, 00:09:13.632 "data_size": 63488 00:09:13.632 } 00:09:13.632 ] 00:09:13.632 }' 00:09:13.632 09:53:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:13.632 09:53:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.222 09:53:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:14.222 09:53:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:14.222 09:53:50 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:14.222 09:53:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:14.222 09:53:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:14.222 09:53:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:14.222 09:53:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:14.222 09:53:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.222 09:53:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.222 09:53:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:14.222 [2024-10-21 09:53:50.614928] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:14.222 09:53:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.222 09:53:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:14.222 "name": "raid_bdev1", 00:09:14.222 "aliases": [ 00:09:14.222 "bdca4d6b-d37b-4cb2-9177-f140a9962e9d" 00:09:14.222 ], 00:09:14.222 "product_name": "Raid Volume", 00:09:14.222 "block_size": 512, 00:09:14.222 "num_blocks": 190464, 00:09:14.222 "uuid": "bdca4d6b-d37b-4cb2-9177-f140a9962e9d", 00:09:14.222 "assigned_rate_limits": { 00:09:14.222 "rw_ios_per_sec": 0, 00:09:14.222 "rw_mbytes_per_sec": 0, 00:09:14.222 "r_mbytes_per_sec": 0, 00:09:14.222 "w_mbytes_per_sec": 0 00:09:14.222 }, 00:09:14.222 "claimed": false, 00:09:14.222 "zoned": false, 00:09:14.222 "supported_io_types": { 00:09:14.222 "read": true, 00:09:14.222 "write": true, 00:09:14.222 "unmap": true, 00:09:14.222 "flush": true, 00:09:14.222 "reset": true, 00:09:14.222 "nvme_admin": false, 00:09:14.222 "nvme_io": false, 00:09:14.222 "nvme_io_md": false, 00:09:14.222 
"write_zeroes": true, 00:09:14.222 "zcopy": false, 00:09:14.222 "get_zone_info": false, 00:09:14.222 "zone_management": false, 00:09:14.222 "zone_append": false, 00:09:14.222 "compare": false, 00:09:14.222 "compare_and_write": false, 00:09:14.222 "abort": false, 00:09:14.222 "seek_hole": false, 00:09:14.222 "seek_data": false, 00:09:14.222 "copy": false, 00:09:14.222 "nvme_iov_md": false 00:09:14.222 }, 00:09:14.222 "memory_domains": [ 00:09:14.222 { 00:09:14.222 "dma_device_id": "system", 00:09:14.222 "dma_device_type": 1 00:09:14.222 }, 00:09:14.222 { 00:09:14.222 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.222 "dma_device_type": 2 00:09:14.222 }, 00:09:14.222 { 00:09:14.222 "dma_device_id": "system", 00:09:14.222 "dma_device_type": 1 00:09:14.222 }, 00:09:14.222 { 00:09:14.222 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.222 "dma_device_type": 2 00:09:14.222 }, 00:09:14.222 { 00:09:14.222 "dma_device_id": "system", 00:09:14.222 "dma_device_type": 1 00:09:14.222 }, 00:09:14.222 { 00:09:14.222 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.222 "dma_device_type": 2 00:09:14.222 } 00:09:14.222 ], 00:09:14.222 "driver_specific": { 00:09:14.222 "raid": { 00:09:14.222 "uuid": "bdca4d6b-d37b-4cb2-9177-f140a9962e9d", 00:09:14.222 "strip_size_kb": 64, 00:09:14.222 "state": "online", 00:09:14.222 "raid_level": "raid0", 00:09:14.222 "superblock": true, 00:09:14.222 "num_base_bdevs": 3, 00:09:14.222 "num_base_bdevs_discovered": 3, 00:09:14.222 "num_base_bdevs_operational": 3, 00:09:14.222 "base_bdevs_list": [ 00:09:14.222 { 00:09:14.222 "name": "pt1", 00:09:14.222 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:14.222 "is_configured": true, 00:09:14.222 "data_offset": 2048, 00:09:14.222 "data_size": 63488 00:09:14.222 }, 00:09:14.222 { 00:09:14.222 "name": "pt2", 00:09:14.223 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:14.223 "is_configured": true, 00:09:14.223 "data_offset": 2048, 00:09:14.223 "data_size": 63488 00:09:14.223 }, 00:09:14.223 
{ 00:09:14.223 "name": "pt3", 00:09:14.223 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:14.223 "is_configured": true, 00:09:14.223 "data_offset": 2048, 00:09:14.223 "data_size": 63488 00:09:14.223 } 00:09:14.223 ] 00:09:14.223 } 00:09:14.223 } 00:09:14.223 }' 00:09:14.223 09:53:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:14.223 09:53:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:14.223 pt2 00:09:14.223 pt3' 00:09:14.223 09:53:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:14.223 09:53:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:14.223 09:53:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:14.223 09:53:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:14.223 09:53:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:14.223 09:53:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.223 09:53:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.223 09:53:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.223 09:53:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:14.223 09:53:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:14.223 09:53:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:14.223 09:53:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:14.223 09:53:50 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.223 09:53:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.223 09:53:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:14.223 09:53:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.223 09:53:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:14.223 09:53:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:14.223 09:53:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:14.223 09:53:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:14.223 09:53:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:14.223 09:53:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.223 09:53:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.482 09:53:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.482 09:53:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:14.482 09:53:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:14.482 09:53:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:14.482 09:53:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:14.482 09:53:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.482 09:53:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.482 
[2024-10-21 09:53:50.858360] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:14.482 09:53:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.482 09:53:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' bdca4d6b-d37b-4cb2-9177-f140a9962e9d '!=' bdca4d6b-d37b-4cb2-9177-f140a9962e9d ']' 00:09:14.482 09:53:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:09:14.482 09:53:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:14.482 09:53:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:14.482 09:53:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 64644 00:09:14.482 09:53:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 64644 ']' 00:09:14.482 09:53:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 64644 00:09:14.482 09:53:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:09:14.482 09:53:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:14.482 09:53:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64644 00:09:14.482 killing process with pid 64644 00:09:14.482 09:53:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:14.483 09:53:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:14.483 09:53:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64644' 00:09:14.483 09:53:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 64644 00:09:14.483 [2024-10-21 09:53:50.937066] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:14.483 [2024-10-21 09:53:50.937176] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:14.483 09:53:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 64644 00:09:14.483 [2024-10-21 09:53:50.937242] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:14.483 [2024-10-21 09:53:50.937256] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:09:14.742 [2024-10-21 09:53:51.263798] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:16.120 09:53:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:16.120 00:09:16.120 real 0m5.275s 00:09:16.120 user 0m7.416s 00:09:16.120 sys 0m0.928s 00:09:16.120 ************************************ 00:09:16.120 END TEST raid_superblock_test 00:09:16.120 ************************************ 00:09:16.120 09:53:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:16.120 09:53:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.120 09:53:52 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:09:16.120 09:53:52 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:16.120 09:53:52 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:16.120 09:53:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:16.120 ************************************ 00:09:16.120 START TEST raid_read_error_test 00:09:16.120 ************************************ 00:09:16.120 09:53:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 3 read 00:09:16.120 09:53:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:16.120 09:53:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:16.120 09:53:52 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:16.120 09:53:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:16.120 09:53:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:16.120 09:53:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:16.120 09:53:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:16.120 09:53:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:16.120 09:53:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:16.120 09:53:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:16.120 09:53:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:16.120 09:53:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:16.120 09:53:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:16.120 09:53:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:16.120 09:53:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:16.120 09:53:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:16.120 09:53:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:16.120 09:53:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:16.120 09:53:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:16.120 09:53:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:16.120 09:53:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:16.120 09:53:52 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:16.120 09:53:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:16.120 09:53:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:16.120 09:53:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:16.120 09:53:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.88jp5HSH9W 00:09:16.120 09:53:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=64902 00:09:16.120 09:53:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 64902 00:09:16.120 09:53:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:16.120 09:53:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 64902 ']' 00:09:16.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:16.120 09:53:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:16.120 09:53:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:16.120 09:53:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:16.120 09:53:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:16.120 09:53:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.120 [2024-10-21 09:53:52.656354] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
00:09:16.120 [2024-10-21 09:53:52.656474] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64902 ] 00:09:16.379 [2024-10-21 09:53:52.818657] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:16.379 [2024-10-21 09:53:52.961361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.637 [2024-10-21 09:53:53.220775] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:16.637 [2024-10-21 09:53:53.220824] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:16.896 09:53:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:16.896 09:53:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:16.896 09:53:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:16.896 09:53:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:16.896 09:53:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.896 09:53:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.156 BaseBdev1_malloc 00:09:17.156 09:53:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.156 09:53:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:17.156 09:53:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.156 09:53:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.156 true 00:09:17.156 09:53:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:17.156 09:53:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:17.156 09:53:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.156 09:53:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.156 [2024-10-21 09:53:53.554487] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:17.156 [2024-10-21 09:53:53.554557] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:17.156 [2024-10-21 09:53:53.554588] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:17.156 [2024-10-21 09:53:53.554603] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:17.156 [2024-10-21 09:53:53.556925] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:17.156 [2024-10-21 09:53:53.556962] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:17.156 BaseBdev1 00:09:17.156 09:53:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.156 09:53:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:17.156 09:53:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:17.156 09:53:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.156 09:53:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.156 BaseBdev2_malloc 00:09:17.156 09:53:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.156 09:53:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:17.156 09:53:53 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.156 09:53:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.156 true 00:09:17.156 09:53:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.156 09:53:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:17.156 09:53:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.156 09:53:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.156 [2024-10-21 09:53:53.626361] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:17.156 [2024-10-21 09:53:53.626415] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:17.156 [2024-10-21 09:53:53.626438] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:09:17.156 [2024-10-21 09:53:53.626449] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:17.156 [2024-10-21 09:53:53.628741] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:17.156 [2024-10-21 09:53:53.628777] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:17.156 BaseBdev2 00:09:17.156 09:53:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.156 09:53:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:17.156 09:53:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:17.156 09:53:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.156 09:53:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.156 BaseBdev3_malloc 00:09:17.156 09:53:53 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.156 09:53:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:17.156 09:53:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.156 09:53:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.156 true 00:09:17.156 09:53:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.156 09:53:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:17.156 09:53:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.156 09:53:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.156 [2024-10-21 09:53:53.709917] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:17.156 [2024-10-21 09:53:53.709977] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:17.156 [2024-10-21 09:53:53.709995] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:09:17.156 [2024-10-21 09:53:53.710007] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:17.156 [2024-10-21 09:53:53.712344] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:17.156 [2024-10-21 09:53:53.712459] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:17.156 BaseBdev3 00:09:17.156 09:53:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.156 09:53:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:17.156 09:53:53 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.156 09:53:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.156 [2024-10-21 09:53:53.721970] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:17.156 [2024-10-21 09:53:53.724019] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:17.156 [2024-10-21 09:53:53.724096] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:17.156 [2024-10-21 09:53:53.724289] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:09:17.156 [2024-10-21 09:53:53.724301] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:17.156 [2024-10-21 09:53:53.724541] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:09:17.156 [2024-10-21 09:53:53.724736] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:09:17.156 [2024-10-21 09:53:53.724750] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:09:17.156 [2024-10-21 09:53:53.724896] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:17.156 09:53:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.156 09:53:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:17.156 09:53:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:17.156 09:53:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:17.156 09:53:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:17.156 09:53:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:17.156 09:53:53 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:17.156 09:53:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:17.156 09:53:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:17.156 09:53:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:17.156 09:53:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:17.157 09:53:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.157 09:53:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:17.157 09:53:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.157 09:53:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.416 09:53:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.416 09:53:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:17.416 "name": "raid_bdev1", 00:09:17.416 "uuid": "738e7d8f-31aa-4558-bd57-c7cf04f57839", 00:09:17.416 "strip_size_kb": 64, 00:09:17.416 "state": "online", 00:09:17.416 "raid_level": "raid0", 00:09:17.416 "superblock": true, 00:09:17.416 "num_base_bdevs": 3, 00:09:17.416 "num_base_bdevs_discovered": 3, 00:09:17.416 "num_base_bdevs_operational": 3, 00:09:17.416 "base_bdevs_list": [ 00:09:17.416 { 00:09:17.416 "name": "BaseBdev1", 00:09:17.416 "uuid": "bb481885-4bf4-5b0f-a1ba-42c48c83d2f7", 00:09:17.416 "is_configured": true, 00:09:17.416 "data_offset": 2048, 00:09:17.416 "data_size": 63488 00:09:17.416 }, 00:09:17.416 { 00:09:17.416 "name": "BaseBdev2", 00:09:17.416 "uuid": "071be180-4208-5a02-b276-94984390c2fc", 00:09:17.416 "is_configured": true, 00:09:17.416 "data_offset": 2048, 00:09:17.416 "data_size": 63488 
00:09:17.416 }, 00:09:17.416 { 00:09:17.416 "name": "BaseBdev3", 00:09:17.416 "uuid": "3bf2b08b-26ac-5353-85e8-1abd0f85fadd", 00:09:17.416 "is_configured": true, 00:09:17.416 "data_offset": 2048, 00:09:17.416 "data_size": 63488 00:09:17.416 } 00:09:17.416 ] 00:09:17.416 }' 00:09:17.416 09:53:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:17.416 09:53:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.675 09:53:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:17.675 09:53:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:17.675 [2024-10-21 09:53:54.194489] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:18.609 09:53:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:18.609 09:53:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.609 09:53:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.609 09:53:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.609 09:53:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:18.609 09:53:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:18.609 09:53:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:18.609 09:53:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:18.609 09:53:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:18.609 09:53:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:09:18.609 09:53:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:18.609 09:53:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:18.609 09:53:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:18.609 09:53:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:18.609 09:53:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:18.609 09:53:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:18.609 09:53:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:18.609 09:53:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.609 09:53:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:18.609 09:53:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.609 09:53:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.609 09:53:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.609 09:53:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:18.609 "name": "raid_bdev1", 00:09:18.609 "uuid": "738e7d8f-31aa-4558-bd57-c7cf04f57839", 00:09:18.609 "strip_size_kb": 64, 00:09:18.609 "state": "online", 00:09:18.609 "raid_level": "raid0", 00:09:18.609 "superblock": true, 00:09:18.609 "num_base_bdevs": 3, 00:09:18.609 "num_base_bdevs_discovered": 3, 00:09:18.609 "num_base_bdevs_operational": 3, 00:09:18.609 "base_bdevs_list": [ 00:09:18.609 { 00:09:18.609 "name": "BaseBdev1", 00:09:18.609 "uuid": "bb481885-4bf4-5b0f-a1ba-42c48c83d2f7", 00:09:18.609 "is_configured": true, 00:09:18.609 "data_offset": 2048, 00:09:18.609 "data_size": 63488 
00:09:18.609 }, 00:09:18.609 { 00:09:18.609 "name": "BaseBdev2", 00:09:18.609 "uuid": "071be180-4208-5a02-b276-94984390c2fc", 00:09:18.609 "is_configured": true, 00:09:18.609 "data_offset": 2048, 00:09:18.609 "data_size": 63488 00:09:18.609 }, 00:09:18.609 { 00:09:18.609 "name": "BaseBdev3", 00:09:18.609 "uuid": "3bf2b08b-26ac-5353-85e8-1abd0f85fadd", 00:09:18.609 "is_configured": true, 00:09:18.609 "data_offset": 2048, 00:09:18.609 "data_size": 63488 00:09:18.609 } 00:09:18.609 ] 00:09:18.609 }' 00:09:18.609 09:53:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:18.609 09:53:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.177 09:53:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:19.177 09:53:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.177 09:53:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.177 [2024-10-21 09:53:55.582822] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:19.177 [2024-10-21 09:53:55.582967] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:19.177 [2024-10-21 09:53:55.585471] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:19.177 [2024-10-21 09:53:55.585560] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:19.177 [2024-10-21 09:53:55.585634] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:19.177 [2024-10-21 09:53:55.585679] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:09:19.177 { 00:09:19.177 "results": [ 00:09:19.177 { 00:09:19.177 "job": "raid_bdev1", 00:09:19.177 "core_mask": "0x1", 00:09:19.177 "workload": "randrw", 00:09:19.177 "percentage": 50, 
00:09:19.177 "status": "finished", 00:09:19.177 "queue_depth": 1, 00:09:19.177 "io_size": 131072, 00:09:19.177 "runtime": 1.389165, 00:09:19.177 "iops": 14415.854128199315, 00:09:19.177 "mibps": 1801.9817660249144, 00:09:19.177 "io_failed": 1, 00:09:19.177 "io_timeout": 0, 00:09:19.177 "avg_latency_us": 97.64057997685657, 00:09:19.177 "min_latency_us": 26.270742358078603, 00:09:19.177 "max_latency_us": 1380.8349344978167 00:09:19.177 } 00:09:19.177 ], 00:09:19.177 "core_count": 1 00:09:19.177 } 00:09:19.177 09:53:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.177 09:53:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 64902 00:09:19.177 09:53:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 64902 ']' 00:09:19.177 09:53:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 64902 00:09:19.177 09:53:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:09:19.177 09:53:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:19.177 09:53:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64902 00:09:19.177 09:53:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:19.177 09:53:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:19.177 09:53:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64902' 00:09:19.177 killing process with pid 64902 00:09:19.177 09:53:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 64902 00:09:19.177 [2024-10-21 09:53:55.630980] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:19.177 09:53:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 64902 00:09:19.436 [2024-10-21 
09:53:55.878781] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:20.814 09:53:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.88jp5HSH9W 00:09:20.814 09:53:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:20.814 09:53:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:20.814 09:53:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:09:20.814 09:53:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:20.814 ************************************ 00:09:20.814 END TEST raid_read_error_test 00:09:20.814 ************************************ 00:09:20.814 09:53:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:20.814 09:53:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:20.814 09:53:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:09:20.814 00:09:20.814 real 0m4.610s 00:09:20.814 user 0m5.313s 00:09:20.814 sys 0m0.637s 00:09:20.814 09:53:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:20.814 09:53:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.814 09:53:57 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:09:20.814 09:53:57 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:20.814 09:53:57 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:20.814 09:53:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:20.814 ************************************ 00:09:20.814 START TEST raid_write_error_test 00:09:20.814 ************************************ 00:09:20.814 09:53:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 3 write 00:09:20.814 09:53:57 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:20.814 09:53:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:20.814 09:53:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:20.814 09:53:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:20.814 09:53:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:20.814 09:53:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:20.814 09:53:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:20.814 09:53:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:20.814 09:53:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:20.814 09:53:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:20.814 09:53:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:20.814 09:53:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:20.814 09:53:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:20.814 09:53:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:20.814 09:53:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:20.814 09:53:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:20.814 09:53:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:20.814 09:53:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:20.814 09:53:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:20.814 09:53:57 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:20.814 09:53:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:20.814 09:53:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:20.814 09:53:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:20.814 09:53:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:20.814 09:53:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:20.814 09:53:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.SOJMifANws 00:09:20.814 09:53:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65043 00:09:20.814 09:53:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:20.814 09:53:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65043 00:09:20.814 09:53:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 65043 ']' 00:09:20.814 09:53:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:20.814 09:53:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:20.814 09:53:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:20.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:20.814 09:53:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:20.814 09:53:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.814 [2024-10-21 09:53:57.332522] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:09:20.814 [2024-10-21 09:53:57.332741] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65043 ] 00:09:21.074 [2024-10-21 09:53:57.493424] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:21.074 [2024-10-21 09:53:57.641557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.334 [2024-10-21 09:53:57.901435] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:21.334 [2024-10-21 09:53:57.901489] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:21.593 09:53:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:21.593 09:53:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:21.593 09:53:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:21.593 09:53:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:21.593 09:53:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.593 09:53:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.854 BaseBdev1_malloc 00:09:21.854 09:53:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.854 09:53:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:09:21.854 09:53:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.854 09:53:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.854 true 00:09:21.854 09:53:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.854 09:53:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:21.854 09:53:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.854 09:53:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.854 [2024-10-21 09:53:58.222072] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:21.854 [2024-10-21 09:53:58.222216] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:21.854 [2024-10-21 09:53:58.222240] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:21.854 [2024-10-21 09:53:58.222256] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:21.854 [2024-10-21 09:53:58.224649] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:21.854 [2024-10-21 09:53:58.224687] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:21.854 BaseBdev1 00:09:21.854 09:53:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.854 09:53:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:21.854 09:53:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:21.854 09:53:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.854 09:53:58 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:21.854 BaseBdev2_malloc 00:09:21.854 09:53:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.854 09:53:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:21.854 09:53:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.854 09:53:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.854 true 00:09:21.854 09:53:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.854 09:53:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:21.854 09:53:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.854 09:53:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.854 [2024-10-21 09:53:58.298821] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:21.854 [2024-10-21 09:53:58.298884] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:21.854 [2024-10-21 09:53:58.298900] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:09:21.854 [2024-10-21 09:53:58.298911] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:21.854 [2024-10-21 09:53:58.301216] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:21.854 [2024-10-21 09:53:58.301252] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:21.854 BaseBdev2 00:09:21.854 09:53:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.854 09:53:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:21.854 09:53:58 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:21.854 09:53:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.854 09:53:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.854 BaseBdev3_malloc 00:09:21.854 09:53:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.854 09:53:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:21.854 09:53:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.854 09:53:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.854 true 00:09:21.854 09:53:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.854 09:53:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:21.854 09:53:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.854 09:53:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.854 [2024-10-21 09:53:58.392011] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:21.854 [2024-10-21 09:53:58.392078] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:21.854 [2024-10-21 09:53:58.392098] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:09:21.854 [2024-10-21 09:53:58.392110] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:21.854 [2024-10-21 09:53:58.394456] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:21.854 [2024-10-21 09:53:58.394497] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:09:21.854 BaseBdev3 00:09:21.854 09:53:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.854 09:53:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:21.854 09:53:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.854 09:53:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.854 [2024-10-21 09:53:58.404069] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:21.854 [2024-10-21 09:53:58.406178] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:21.854 [2024-10-21 09:53:58.406257] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:21.854 [2024-10-21 09:53:58.406459] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:09:21.854 [2024-10-21 09:53:58.406472] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:21.854 [2024-10-21 09:53:58.406736] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:09:21.854 [2024-10-21 09:53:58.406910] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:09:21.854 [2024-10-21 09:53:58.406932] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:09:21.854 [2024-10-21 09:53:58.407077] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:21.854 09:53:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.854 09:53:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:21.854 09:53:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:09:21.854 09:53:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:21.854 09:53:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:21.854 09:53:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:21.854 09:53:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:21.854 09:53:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:21.854 09:53:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:21.854 09:53:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:21.854 09:53:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:21.854 09:53:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.854 09:53:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:21.854 09:53:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.854 09:53:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.854 09:53:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.114 09:53:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:22.114 "name": "raid_bdev1", 00:09:22.114 "uuid": "cd53ea15-fd28-434b-a71b-c27b57f63554", 00:09:22.114 "strip_size_kb": 64, 00:09:22.114 "state": "online", 00:09:22.114 "raid_level": "raid0", 00:09:22.114 "superblock": true, 00:09:22.114 "num_base_bdevs": 3, 00:09:22.114 "num_base_bdevs_discovered": 3, 00:09:22.114 "num_base_bdevs_operational": 3, 00:09:22.114 "base_bdevs_list": [ 00:09:22.114 { 00:09:22.114 "name": "BaseBdev1", 
00:09:22.114 "uuid": "832b45c9-c4c0-5bb7-a340-84a890dc8a26", 00:09:22.114 "is_configured": true, 00:09:22.114 "data_offset": 2048, 00:09:22.114 "data_size": 63488 00:09:22.114 }, 00:09:22.114 { 00:09:22.114 "name": "BaseBdev2", 00:09:22.114 "uuid": "67eeede2-b645-5e1d-b035-caed1b795247", 00:09:22.114 "is_configured": true, 00:09:22.114 "data_offset": 2048, 00:09:22.114 "data_size": 63488 00:09:22.114 }, 00:09:22.114 { 00:09:22.114 "name": "BaseBdev3", 00:09:22.114 "uuid": "af86724b-a09f-5849-a51a-bbbb423b1aa3", 00:09:22.114 "is_configured": true, 00:09:22.114 "data_offset": 2048, 00:09:22.114 "data_size": 63488 00:09:22.114 } 00:09:22.114 ] 00:09:22.114 }' 00:09:22.114 09:53:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:22.114 09:53:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.380 09:53:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:22.380 09:53:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:22.380 [2024-10-21 09:53:58.924692] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:23.318 09:53:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:23.318 09:53:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.318 09:53:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.318 09:53:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.318 09:53:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:23.318 09:53:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:23.318 09:53:59 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:23.318 09:53:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:23.318 09:53:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:23.318 09:53:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:23.318 09:53:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:23.318 09:53:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:23.318 09:53:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:23.318 09:53:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:23.318 09:53:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:23.318 09:53:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:23.318 09:53:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:23.318 09:53:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.318 09:53:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:23.318 09:53:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.318 09:53:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.318 09:53:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.318 09:53:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:23.318 "name": "raid_bdev1", 00:09:23.318 "uuid": "cd53ea15-fd28-434b-a71b-c27b57f63554", 00:09:23.318 "strip_size_kb": 64, 00:09:23.318 "state": "online", 00:09:23.318 
"raid_level": "raid0", 00:09:23.318 "superblock": true, 00:09:23.318 "num_base_bdevs": 3, 00:09:23.318 "num_base_bdevs_discovered": 3, 00:09:23.318 "num_base_bdevs_operational": 3, 00:09:23.318 "base_bdevs_list": [ 00:09:23.318 { 00:09:23.318 "name": "BaseBdev1", 00:09:23.318 "uuid": "832b45c9-c4c0-5bb7-a340-84a890dc8a26", 00:09:23.318 "is_configured": true, 00:09:23.318 "data_offset": 2048, 00:09:23.318 "data_size": 63488 00:09:23.318 }, 00:09:23.318 { 00:09:23.318 "name": "BaseBdev2", 00:09:23.318 "uuid": "67eeede2-b645-5e1d-b035-caed1b795247", 00:09:23.318 "is_configured": true, 00:09:23.318 "data_offset": 2048, 00:09:23.318 "data_size": 63488 00:09:23.318 }, 00:09:23.318 { 00:09:23.318 "name": "BaseBdev3", 00:09:23.318 "uuid": "af86724b-a09f-5849-a51a-bbbb423b1aa3", 00:09:23.318 "is_configured": true, 00:09:23.318 "data_offset": 2048, 00:09:23.318 "data_size": 63488 00:09:23.318 } 00:09:23.318 ] 00:09:23.318 }' 00:09:23.318 09:53:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:23.318 09:53:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.885 09:54:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:23.885 09:54:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.885 09:54:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.885 [2024-10-21 09:54:00.305336] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:23.885 [2024-10-21 09:54:00.305490] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:23.885 [2024-10-21 09:54:00.308007] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:23.885 [2024-10-21 09:54:00.308096] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:23.885 [2024-10-21 09:54:00.308159] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:23.885 [2024-10-21 09:54:00.308199] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:09:23.885 { 00:09:23.885 "results": [ 00:09:23.885 { 00:09:23.885 "job": "raid_bdev1", 00:09:23.885 "core_mask": "0x1", 00:09:23.885 "workload": "randrw", 00:09:23.885 "percentage": 50, 00:09:23.885 "status": "finished", 00:09:23.885 "queue_depth": 1, 00:09:23.885 "io_size": 131072, 00:09:23.885 "runtime": 1.381318, 00:09:23.885 "iops": 14282.011817698749, 00:09:23.885 "mibps": 1785.2514772123436, 00:09:23.885 "io_failed": 1, 00:09:23.885 "io_timeout": 0, 00:09:23.885 "avg_latency_us": 98.67924242481254, 00:09:23.885 "min_latency_us": 24.705676855895195, 00:09:23.885 "max_latency_us": 1366.5257641921398 00:09:23.885 } 00:09:23.885 ], 00:09:23.885 "core_count": 1 00:09:23.885 } 00:09:23.885 09:54:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.885 09:54:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65043 00:09:23.885 09:54:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 65043 ']' 00:09:23.885 09:54:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 65043 00:09:23.885 09:54:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:09:23.885 09:54:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:23.885 09:54:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65043 00:09:23.885 09:54:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:23.885 09:54:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:23.885 killing process with pid 65043 00:09:23.885 
09:54:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65043' 00:09:23.885 09:54:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 65043 00:09:23.885 [2024-10-21 09:54:00.345854] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:23.885 09:54:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 65043 00:09:24.144 [2024-10-21 09:54:00.594607] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:25.523 09:54:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:25.523 09:54:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.SOJMifANws 00:09:25.523 09:54:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:25.523 09:54:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:09:25.523 09:54:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:25.523 09:54:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:25.523 09:54:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:25.523 ************************************ 00:09:25.523 END TEST raid_write_error_test 00:09:25.523 ************************************ 00:09:25.523 09:54:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:09:25.523 00:09:25.523 real 0m4.649s 00:09:25.523 user 0m5.375s 00:09:25.523 sys 0m0.628s 00:09:25.523 09:54:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:25.523 09:54:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.523 09:54:01 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:25.523 09:54:01 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:09:25.523 09:54:01 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:25.523 09:54:01 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:25.523 09:54:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:25.523 ************************************ 00:09:25.523 START TEST raid_state_function_test 00:09:25.523 ************************************ 00:09:25.523 09:54:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 3 false 00:09:25.523 09:54:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:25.523 09:54:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:25.523 09:54:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:25.523 09:54:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:25.523 09:54:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:25.523 09:54:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:25.523 09:54:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:25.523 09:54:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:25.523 09:54:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:25.523 09:54:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:25.523 09:54:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:25.523 09:54:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:25.523 09:54:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:25.523 09:54:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:25.523 09:54:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:25.523 09:54:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:25.523 09:54:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:25.523 09:54:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:25.523 09:54:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:25.523 09:54:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:25.523 09:54:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:25.523 09:54:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:25.523 09:54:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:25.523 09:54:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:25.523 09:54:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:25.523 09:54:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:25.523 09:54:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=65192 00:09:25.523 09:54:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:25.523 09:54:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65192' 00:09:25.524 Process raid pid: 65192 00:09:25.524 09:54:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 65192 00:09:25.524 09:54:01 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 65192 ']' 00:09:25.524 09:54:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:25.524 09:54:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:25.524 09:54:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:25.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:25.524 09:54:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:25.524 09:54:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.524 [2024-10-21 09:54:02.044896] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:09:25.524 [2024-10-21 09:54:02.045090] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:25.783 [2024-10-21 09:54:02.208590] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:25.783 [2024-10-21 09:54:02.350801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.043 [2024-10-21 09:54:02.606230] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:26.043 [2024-10-21 09:54:02.606396] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:26.302 09:54:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:26.302 09:54:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:09:26.302 09:54:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:26.302 09:54:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.302 09:54:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.302 [2024-10-21 09:54:02.875389] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:26.302 [2024-10-21 09:54:02.875465] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:26.302 [2024-10-21 09:54:02.875476] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:26.302 [2024-10-21 09:54:02.875488] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:26.302 [2024-10-21 09:54:02.875495] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:26.302 [2024-10-21 09:54:02.875505] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:26.302 09:54:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.302 09:54:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:26.302 09:54:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:26.302 09:54:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:26.302 09:54:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:26.302 09:54:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:26.302 09:54:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:26.302 09:54:02 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:26.302 09:54:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:26.302 09:54:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:26.302 09:54:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:26.302 09:54:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.302 09:54:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:26.302 09:54:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.302 09:54:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.561 09:54:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.561 09:54:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:26.561 "name": "Existed_Raid", 00:09:26.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:26.561 "strip_size_kb": 64, 00:09:26.561 "state": "configuring", 00:09:26.561 "raid_level": "concat", 00:09:26.561 "superblock": false, 00:09:26.561 "num_base_bdevs": 3, 00:09:26.561 "num_base_bdevs_discovered": 0, 00:09:26.561 "num_base_bdevs_operational": 3, 00:09:26.561 "base_bdevs_list": [ 00:09:26.561 { 00:09:26.561 "name": "BaseBdev1", 00:09:26.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:26.561 "is_configured": false, 00:09:26.561 "data_offset": 0, 00:09:26.561 "data_size": 0 00:09:26.561 }, 00:09:26.561 { 00:09:26.561 "name": "BaseBdev2", 00:09:26.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:26.561 "is_configured": false, 00:09:26.561 "data_offset": 0, 00:09:26.561 "data_size": 0 00:09:26.561 }, 00:09:26.561 { 00:09:26.561 "name": "BaseBdev3", 00:09:26.561 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:26.561 "is_configured": false, 00:09:26.561 "data_offset": 0, 00:09:26.561 "data_size": 0 00:09:26.561 } 00:09:26.561 ] 00:09:26.562 }' 00:09:26.562 09:54:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:26.562 09:54:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.821 09:54:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:26.821 09:54:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.821 09:54:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.821 [2024-10-21 09:54:03.314655] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:26.821 [2024-10-21 09:54:03.314807] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000005b80 name Existed_Raid, state configuring 00:09:26.821 09:54:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.821 09:54:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:26.821 09:54:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.821 09:54:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.821 [2024-10-21 09:54:03.326650] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:26.821 [2024-10-21 09:54:03.326742] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:26.821 [2024-10-21 09:54:03.326772] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:26.821 [2024-10-21 09:54:03.326808] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:09:26.821 [2024-10-21 09:54:03.326831] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:26.821 [2024-10-21 09:54:03.326854] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:26.821 09:54:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.821 09:54:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:26.821 09:54:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.821 09:54:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.821 [2024-10-21 09:54:03.382559] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:26.821 BaseBdev1 00:09:26.822 09:54:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.822 09:54:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:26.822 09:54:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:26.822 09:54:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:26.822 09:54:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:26.822 09:54:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:26.822 09:54:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:26.822 09:54:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:26.822 09:54:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.822 09:54:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:09:26.822 09:54:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.822 09:54:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:26.822 09:54:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.822 09:54:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.822 [ 00:09:26.822 { 00:09:26.822 "name": "BaseBdev1", 00:09:26.822 "aliases": [ 00:09:26.822 "e0a88704-6450-487f-ac7a-7c5b0584fcd1" 00:09:26.822 ], 00:09:26.822 "product_name": "Malloc disk", 00:09:26.822 "block_size": 512, 00:09:26.822 "num_blocks": 65536, 00:09:26.822 "uuid": "e0a88704-6450-487f-ac7a-7c5b0584fcd1", 00:09:26.822 "assigned_rate_limits": { 00:09:26.822 "rw_ios_per_sec": 0, 00:09:26.822 "rw_mbytes_per_sec": 0, 00:09:26.822 "r_mbytes_per_sec": 0, 00:09:26.822 "w_mbytes_per_sec": 0 00:09:26.822 }, 00:09:26.822 "claimed": true, 00:09:26.822 "claim_type": "exclusive_write", 00:09:26.822 "zoned": false, 00:09:26.822 "supported_io_types": { 00:09:26.822 "read": true, 00:09:26.822 "write": true, 00:09:26.822 "unmap": true, 00:09:26.822 "flush": true, 00:09:26.822 "reset": true, 00:09:26.822 "nvme_admin": false, 00:09:26.822 "nvme_io": false, 00:09:26.822 "nvme_io_md": false, 00:09:26.822 "write_zeroes": true, 00:09:26.822 "zcopy": true, 00:09:26.822 "get_zone_info": false, 00:09:26.822 "zone_management": false, 00:09:26.822 "zone_append": false, 00:09:27.082 "compare": false, 00:09:27.082 "compare_and_write": false, 00:09:27.082 "abort": true, 00:09:27.082 "seek_hole": false, 00:09:27.082 "seek_data": false, 00:09:27.082 "copy": true, 00:09:27.082 "nvme_iov_md": false 00:09:27.082 }, 00:09:27.082 "memory_domains": [ 00:09:27.082 { 00:09:27.082 "dma_device_id": "system", 00:09:27.082 "dma_device_type": 1 00:09:27.082 }, 00:09:27.082 { 00:09:27.082 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:09:27.082 "dma_device_type": 2 00:09:27.082 } 00:09:27.082 ], 00:09:27.082 "driver_specific": {} 00:09:27.082 } 00:09:27.082 ] 00:09:27.082 09:54:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.082 09:54:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:27.082 09:54:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:27.082 09:54:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:27.082 09:54:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:27.082 09:54:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:27.082 09:54:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:27.082 09:54:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:27.082 09:54:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:27.082 09:54:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:27.082 09:54:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:27.082 09:54:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:27.082 09:54:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.082 09:54:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:27.082 09:54:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.082 09:54:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.082 09:54:03 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.082 09:54:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:27.082 "name": "Existed_Raid", 00:09:27.082 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.082 "strip_size_kb": 64, 00:09:27.082 "state": "configuring", 00:09:27.082 "raid_level": "concat", 00:09:27.082 "superblock": false, 00:09:27.082 "num_base_bdevs": 3, 00:09:27.082 "num_base_bdevs_discovered": 1, 00:09:27.082 "num_base_bdevs_operational": 3, 00:09:27.082 "base_bdevs_list": [ 00:09:27.082 { 00:09:27.082 "name": "BaseBdev1", 00:09:27.082 "uuid": "e0a88704-6450-487f-ac7a-7c5b0584fcd1", 00:09:27.082 "is_configured": true, 00:09:27.082 "data_offset": 0, 00:09:27.082 "data_size": 65536 00:09:27.082 }, 00:09:27.082 { 00:09:27.082 "name": "BaseBdev2", 00:09:27.082 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.082 "is_configured": false, 00:09:27.082 "data_offset": 0, 00:09:27.082 "data_size": 0 00:09:27.082 }, 00:09:27.082 { 00:09:27.082 "name": "BaseBdev3", 00:09:27.082 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.082 "is_configured": false, 00:09:27.082 "data_offset": 0, 00:09:27.082 "data_size": 0 00:09:27.082 } 00:09:27.082 ] 00:09:27.082 }' 00:09:27.082 09:54:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:27.082 09:54:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.342 09:54:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:27.342 09:54:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.342 09:54:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.342 [2024-10-21 09:54:03.841808] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:27.342 [2024-10-21 09:54:03.841985] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000005f00 name Existed_Raid, state configuring 00:09:27.342 09:54:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.342 09:54:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:27.342 09:54:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.342 09:54:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.342 [2024-10-21 09:54:03.853811] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:27.342 [2024-10-21 09:54:03.855935] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:27.342 [2024-10-21 09:54:03.856020] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:27.342 [2024-10-21 09:54:03.856049] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:27.342 [2024-10-21 09:54:03.856071] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:27.342 09:54:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.342 09:54:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:27.342 09:54:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:27.342 09:54:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:27.342 09:54:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:27.342 09:54:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:27.342 09:54:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:27.342 09:54:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:27.342 09:54:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:27.342 09:54:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:27.342 09:54:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:27.342 09:54:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:27.342 09:54:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:27.342 09:54:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:27.342 09:54:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.342 09:54:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.342 09:54:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.342 09:54:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.342 09:54:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:27.342 "name": "Existed_Raid", 00:09:27.342 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.342 "strip_size_kb": 64, 00:09:27.342 "state": "configuring", 00:09:27.342 "raid_level": "concat", 00:09:27.342 "superblock": false, 00:09:27.342 "num_base_bdevs": 3, 00:09:27.342 "num_base_bdevs_discovered": 1, 00:09:27.342 "num_base_bdevs_operational": 3, 00:09:27.342 "base_bdevs_list": [ 00:09:27.342 { 00:09:27.342 "name": "BaseBdev1", 00:09:27.342 "uuid": "e0a88704-6450-487f-ac7a-7c5b0584fcd1", 00:09:27.342 "is_configured": true, 00:09:27.342 "data_offset": 
0, 00:09:27.343 "data_size": 65536 00:09:27.343 }, 00:09:27.343 { 00:09:27.343 "name": "BaseBdev2", 00:09:27.343 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.343 "is_configured": false, 00:09:27.343 "data_offset": 0, 00:09:27.343 "data_size": 0 00:09:27.343 }, 00:09:27.343 { 00:09:27.343 "name": "BaseBdev3", 00:09:27.343 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.343 "is_configured": false, 00:09:27.343 "data_offset": 0, 00:09:27.343 "data_size": 0 00:09:27.343 } 00:09:27.343 ] 00:09:27.343 }' 00:09:27.343 09:54:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:27.343 09:54:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.912 09:54:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:27.912 09:54:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.912 09:54:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.912 [2024-10-21 09:54:04.341377] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:27.912 BaseBdev2 00:09:27.912 09:54:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.912 09:54:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:27.912 09:54:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:27.912 09:54:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:27.912 09:54:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:27.912 09:54:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:27.912 09:54:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
00:09:27.912 09:54:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:27.912 09:54:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.912 09:54:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.912 09:54:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.912 09:54:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:27.912 09:54:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.912 09:54:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.912 [ 00:09:27.912 { 00:09:27.912 "name": "BaseBdev2", 00:09:27.912 "aliases": [ 00:09:27.912 "8375a3d3-8c48-4278-8f7c-f23f10a48c26" 00:09:27.912 ], 00:09:27.913 "product_name": "Malloc disk", 00:09:27.913 "block_size": 512, 00:09:27.913 "num_blocks": 65536, 00:09:27.913 "uuid": "8375a3d3-8c48-4278-8f7c-f23f10a48c26", 00:09:27.913 "assigned_rate_limits": { 00:09:27.913 "rw_ios_per_sec": 0, 00:09:27.913 "rw_mbytes_per_sec": 0, 00:09:27.913 "r_mbytes_per_sec": 0, 00:09:27.913 "w_mbytes_per_sec": 0 00:09:27.913 }, 00:09:27.913 "claimed": true, 00:09:27.913 "claim_type": "exclusive_write", 00:09:27.913 "zoned": false, 00:09:27.913 "supported_io_types": { 00:09:27.913 "read": true, 00:09:27.913 "write": true, 00:09:27.913 "unmap": true, 00:09:27.913 "flush": true, 00:09:27.913 "reset": true, 00:09:27.913 "nvme_admin": false, 00:09:27.913 "nvme_io": false, 00:09:27.913 "nvme_io_md": false, 00:09:27.913 "write_zeroes": true, 00:09:27.913 "zcopy": true, 00:09:27.913 "get_zone_info": false, 00:09:27.913 "zone_management": false, 00:09:27.913 "zone_append": false, 00:09:27.913 "compare": false, 00:09:27.913 "compare_and_write": false, 00:09:27.913 "abort": true, 00:09:27.913 "seek_hole": 
false, 00:09:27.913 "seek_data": false, 00:09:27.913 "copy": true, 00:09:27.913 "nvme_iov_md": false 00:09:27.913 }, 00:09:27.913 "memory_domains": [ 00:09:27.913 { 00:09:27.913 "dma_device_id": "system", 00:09:27.913 "dma_device_type": 1 00:09:27.913 }, 00:09:27.913 { 00:09:27.913 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:27.913 "dma_device_type": 2 00:09:27.913 } 00:09:27.913 ], 00:09:27.913 "driver_specific": {} 00:09:27.913 } 00:09:27.913 ] 00:09:27.913 09:54:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.913 09:54:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:27.913 09:54:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:27.913 09:54:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:27.913 09:54:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:27.913 09:54:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:27.913 09:54:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:27.913 09:54:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:27.913 09:54:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:27.913 09:54:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:27.913 09:54:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:27.913 09:54:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:27.913 09:54:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:27.913 09:54:04 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:27.913 09:54:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.913 09:54:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.913 09:54:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.913 09:54:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:27.913 09:54:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.913 09:54:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:27.913 "name": "Existed_Raid", 00:09:27.913 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.913 "strip_size_kb": 64, 00:09:27.913 "state": "configuring", 00:09:27.913 "raid_level": "concat", 00:09:27.913 "superblock": false, 00:09:27.913 "num_base_bdevs": 3, 00:09:27.913 "num_base_bdevs_discovered": 2, 00:09:27.913 "num_base_bdevs_operational": 3, 00:09:27.913 "base_bdevs_list": [ 00:09:27.913 { 00:09:27.913 "name": "BaseBdev1", 00:09:27.913 "uuid": "e0a88704-6450-487f-ac7a-7c5b0584fcd1", 00:09:27.913 "is_configured": true, 00:09:27.913 "data_offset": 0, 00:09:27.913 "data_size": 65536 00:09:27.913 }, 00:09:27.913 { 00:09:27.913 "name": "BaseBdev2", 00:09:27.913 "uuid": "8375a3d3-8c48-4278-8f7c-f23f10a48c26", 00:09:27.913 "is_configured": true, 00:09:27.913 "data_offset": 0, 00:09:27.913 "data_size": 65536 00:09:27.913 }, 00:09:27.913 { 00:09:27.913 "name": "BaseBdev3", 00:09:27.913 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.913 "is_configured": false, 00:09:27.913 "data_offset": 0, 00:09:27.913 "data_size": 0 00:09:27.913 } 00:09:27.913 ] 00:09:27.913 }' 00:09:27.913 09:54:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:27.913 09:54:04 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:28.481 09:54:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:28.481 09:54:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.481 09:54:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.481 [2024-10-21 09:54:04.851097] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:28.481 [2024-10-21 09:54:04.851229] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:09:28.481 [2024-10-21 09:54:04.851252] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:28.481 [2024-10-21 09:54:04.851545] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:09:28.481 [2024-10-21 09:54:04.851784] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:09:28.481 [2024-10-21 09:54:04.851795] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006280 00:09:28.481 [2024-10-21 09:54:04.852086] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:28.481 BaseBdev3 00:09:28.481 09:54:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.481 09:54:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:28.481 09:54:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:28.481 09:54:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:28.481 09:54:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:28.481 09:54:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:28.481 09:54:04 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:28.481 09:54:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:28.481 09:54:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.481 09:54:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.481 09:54:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.481 09:54:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:28.481 09:54:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.481 09:54:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.481 [ 00:09:28.481 { 00:09:28.481 "name": "BaseBdev3", 00:09:28.481 "aliases": [ 00:09:28.481 "f7d0483e-56f6-46eb-8c05-c6d192f972db" 00:09:28.481 ], 00:09:28.481 "product_name": "Malloc disk", 00:09:28.481 "block_size": 512, 00:09:28.481 "num_blocks": 65536, 00:09:28.481 "uuid": "f7d0483e-56f6-46eb-8c05-c6d192f972db", 00:09:28.481 "assigned_rate_limits": { 00:09:28.481 "rw_ios_per_sec": 0, 00:09:28.481 "rw_mbytes_per_sec": 0, 00:09:28.481 "r_mbytes_per_sec": 0, 00:09:28.481 "w_mbytes_per_sec": 0 00:09:28.481 }, 00:09:28.481 "claimed": true, 00:09:28.481 "claim_type": "exclusive_write", 00:09:28.481 "zoned": false, 00:09:28.481 "supported_io_types": { 00:09:28.481 "read": true, 00:09:28.481 "write": true, 00:09:28.481 "unmap": true, 00:09:28.481 "flush": true, 00:09:28.481 "reset": true, 00:09:28.481 "nvme_admin": false, 00:09:28.481 "nvme_io": false, 00:09:28.481 "nvme_io_md": false, 00:09:28.481 "write_zeroes": true, 00:09:28.481 "zcopy": true, 00:09:28.482 "get_zone_info": false, 00:09:28.482 "zone_management": false, 00:09:28.482 "zone_append": false, 00:09:28.482 "compare": false, 
00:09:28.482 "compare_and_write": false, 00:09:28.482 "abort": true, 00:09:28.482 "seek_hole": false, 00:09:28.482 "seek_data": false, 00:09:28.482 "copy": true, 00:09:28.482 "nvme_iov_md": false 00:09:28.482 }, 00:09:28.482 "memory_domains": [ 00:09:28.482 { 00:09:28.482 "dma_device_id": "system", 00:09:28.482 "dma_device_type": 1 00:09:28.482 }, 00:09:28.482 { 00:09:28.482 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:28.482 "dma_device_type": 2 00:09:28.482 } 00:09:28.482 ], 00:09:28.482 "driver_specific": {} 00:09:28.482 } 00:09:28.482 ] 00:09:28.482 09:54:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.482 09:54:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:28.482 09:54:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:28.482 09:54:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:28.482 09:54:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:28.482 09:54:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:28.482 09:54:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:28.482 09:54:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:28.482 09:54:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:28.482 09:54:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:28.482 09:54:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:28.482 09:54:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:28.482 09:54:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:28.482 09:54:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:28.482 09:54:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.482 09:54:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:28.482 09:54:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.482 09:54:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.482 09:54:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.482 09:54:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:28.482 "name": "Existed_Raid", 00:09:28.482 "uuid": "c4df18bd-7464-4414-b1e3-b2171102e2d3", 00:09:28.482 "strip_size_kb": 64, 00:09:28.482 "state": "online", 00:09:28.482 "raid_level": "concat", 00:09:28.482 "superblock": false, 00:09:28.482 "num_base_bdevs": 3, 00:09:28.482 "num_base_bdevs_discovered": 3, 00:09:28.482 "num_base_bdevs_operational": 3, 00:09:28.482 "base_bdevs_list": [ 00:09:28.482 { 00:09:28.482 "name": "BaseBdev1", 00:09:28.482 "uuid": "e0a88704-6450-487f-ac7a-7c5b0584fcd1", 00:09:28.482 "is_configured": true, 00:09:28.482 "data_offset": 0, 00:09:28.482 "data_size": 65536 00:09:28.482 }, 00:09:28.482 { 00:09:28.482 "name": "BaseBdev2", 00:09:28.482 "uuid": "8375a3d3-8c48-4278-8f7c-f23f10a48c26", 00:09:28.482 "is_configured": true, 00:09:28.482 "data_offset": 0, 00:09:28.482 "data_size": 65536 00:09:28.482 }, 00:09:28.482 { 00:09:28.482 "name": "BaseBdev3", 00:09:28.482 "uuid": "f7d0483e-56f6-46eb-8c05-c6d192f972db", 00:09:28.482 "is_configured": true, 00:09:28.482 "data_offset": 0, 00:09:28.482 "data_size": 65536 00:09:28.482 } 00:09:28.482 ] 00:09:28.482 }' 00:09:28.482 09:54:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:09:28.482 09:54:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.742 09:54:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:28.742 09:54:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:28.742 09:54:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:28.742 09:54:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:28.742 09:54:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:28.742 09:54:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:29.001 09:54:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:29.001 09:54:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:29.001 09:54:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.001 09:54:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.001 [2024-10-21 09:54:05.346861] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:29.001 09:54:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.001 09:54:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:29.001 "name": "Existed_Raid", 00:09:29.001 "aliases": [ 00:09:29.001 "c4df18bd-7464-4414-b1e3-b2171102e2d3" 00:09:29.001 ], 00:09:29.001 "product_name": "Raid Volume", 00:09:29.001 "block_size": 512, 00:09:29.001 "num_blocks": 196608, 00:09:29.001 "uuid": "c4df18bd-7464-4414-b1e3-b2171102e2d3", 00:09:29.001 "assigned_rate_limits": { 00:09:29.001 "rw_ios_per_sec": 0, 00:09:29.001 "rw_mbytes_per_sec": 0, 00:09:29.001 "r_mbytes_per_sec": 
0, 00:09:29.001 "w_mbytes_per_sec": 0 00:09:29.001 }, 00:09:29.001 "claimed": false, 00:09:29.001 "zoned": false, 00:09:29.001 "supported_io_types": { 00:09:29.001 "read": true, 00:09:29.001 "write": true, 00:09:29.001 "unmap": true, 00:09:29.001 "flush": true, 00:09:29.001 "reset": true, 00:09:29.001 "nvme_admin": false, 00:09:29.001 "nvme_io": false, 00:09:29.001 "nvme_io_md": false, 00:09:29.001 "write_zeroes": true, 00:09:29.001 "zcopy": false, 00:09:29.001 "get_zone_info": false, 00:09:29.001 "zone_management": false, 00:09:29.001 "zone_append": false, 00:09:29.001 "compare": false, 00:09:29.001 "compare_and_write": false, 00:09:29.001 "abort": false, 00:09:29.001 "seek_hole": false, 00:09:29.001 "seek_data": false, 00:09:29.001 "copy": false, 00:09:29.001 "nvme_iov_md": false 00:09:29.001 }, 00:09:29.001 "memory_domains": [ 00:09:29.001 { 00:09:29.001 "dma_device_id": "system", 00:09:29.001 "dma_device_type": 1 00:09:29.001 }, 00:09:29.001 { 00:09:29.001 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.001 "dma_device_type": 2 00:09:29.001 }, 00:09:29.001 { 00:09:29.001 "dma_device_id": "system", 00:09:29.001 "dma_device_type": 1 00:09:29.001 }, 00:09:29.001 { 00:09:29.001 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.001 "dma_device_type": 2 00:09:29.001 }, 00:09:29.001 { 00:09:29.001 "dma_device_id": "system", 00:09:29.001 "dma_device_type": 1 00:09:29.001 }, 00:09:29.001 { 00:09:29.001 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.001 "dma_device_type": 2 00:09:29.001 } 00:09:29.001 ], 00:09:29.001 "driver_specific": { 00:09:29.001 "raid": { 00:09:29.001 "uuid": "c4df18bd-7464-4414-b1e3-b2171102e2d3", 00:09:29.001 "strip_size_kb": 64, 00:09:29.001 "state": "online", 00:09:29.001 "raid_level": "concat", 00:09:29.001 "superblock": false, 00:09:29.001 "num_base_bdevs": 3, 00:09:29.001 "num_base_bdevs_discovered": 3, 00:09:29.001 "num_base_bdevs_operational": 3, 00:09:29.001 "base_bdevs_list": [ 00:09:29.001 { 00:09:29.001 "name": "BaseBdev1", 
00:09:29.001 "uuid": "e0a88704-6450-487f-ac7a-7c5b0584fcd1", 00:09:29.001 "is_configured": true, 00:09:29.001 "data_offset": 0, 00:09:29.001 "data_size": 65536 00:09:29.001 }, 00:09:29.001 { 00:09:29.001 "name": "BaseBdev2", 00:09:29.001 "uuid": "8375a3d3-8c48-4278-8f7c-f23f10a48c26", 00:09:29.001 "is_configured": true, 00:09:29.001 "data_offset": 0, 00:09:29.001 "data_size": 65536 00:09:29.001 }, 00:09:29.001 { 00:09:29.001 "name": "BaseBdev3", 00:09:29.001 "uuid": "f7d0483e-56f6-46eb-8c05-c6d192f972db", 00:09:29.001 "is_configured": true, 00:09:29.001 "data_offset": 0, 00:09:29.002 "data_size": 65536 00:09:29.002 } 00:09:29.002 ] 00:09:29.002 } 00:09:29.002 } 00:09:29.002 }' 00:09:29.002 09:54:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:29.002 09:54:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:29.002 BaseBdev2 00:09:29.002 BaseBdev3' 00:09:29.002 09:54:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:29.002 09:54:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:29.002 09:54:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:29.002 09:54:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:29.002 09:54:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:29.002 09:54:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.002 09:54:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.002 09:54:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:09:29.002 09:54:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:29.002 09:54:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:29.002 09:54:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:29.002 09:54:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:29.002 09:54:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.002 09:54:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.002 09:54:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:29.002 09:54:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.002 09:54:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:29.002 09:54:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:29.002 09:54:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:29.002 09:54:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:29.002 09:54:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:29.002 09:54:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.002 09:54:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.002 09:54:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.261 09:54:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:09:29.261 09:54:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:29.261 09:54:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:29.261 09:54:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.261 09:54:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.261 [2024-10-21 09:54:05.606047] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:29.261 [2024-10-21 09:54:05.606172] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:29.261 [2024-10-21 09:54:05.606260] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:29.261 09:54:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.261 09:54:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:29.261 09:54:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:29.261 09:54:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:29.261 09:54:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:29.261 09:54:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:29.261 09:54:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:09:29.261 09:54:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:29.261 09:54:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:29.261 09:54:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:29.261 09:54:05 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:29.261 09:54:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:29.261 09:54:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:29.262 09:54:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:29.262 09:54:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:29.262 09:54:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:29.262 09:54:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.262 09:54:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.262 09:54:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.262 09:54:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:29.262 09:54:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.262 09:54:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:29.262 "name": "Existed_Raid", 00:09:29.262 "uuid": "c4df18bd-7464-4414-b1e3-b2171102e2d3", 00:09:29.262 "strip_size_kb": 64, 00:09:29.262 "state": "offline", 00:09:29.262 "raid_level": "concat", 00:09:29.262 "superblock": false, 00:09:29.262 "num_base_bdevs": 3, 00:09:29.262 "num_base_bdevs_discovered": 2, 00:09:29.262 "num_base_bdevs_operational": 2, 00:09:29.262 "base_bdevs_list": [ 00:09:29.262 { 00:09:29.262 "name": null, 00:09:29.262 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.262 "is_configured": false, 00:09:29.262 "data_offset": 0, 00:09:29.262 "data_size": 65536 00:09:29.262 }, 00:09:29.262 { 00:09:29.262 "name": "BaseBdev2", 00:09:29.262 "uuid": 
"8375a3d3-8c48-4278-8f7c-f23f10a48c26", 00:09:29.262 "is_configured": true, 00:09:29.262 "data_offset": 0, 00:09:29.262 "data_size": 65536 00:09:29.262 }, 00:09:29.262 { 00:09:29.262 "name": "BaseBdev3", 00:09:29.262 "uuid": "f7d0483e-56f6-46eb-8c05-c6d192f972db", 00:09:29.262 "is_configured": true, 00:09:29.262 "data_offset": 0, 00:09:29.262 "data_size": 65536 00:09:29.262 } 00:09:29.262 ] 00:09:29.262 }' 00:09:29.262 09:54:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:29.262 09:54:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.832 09:54:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:29.832 09:54:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:29.832 09:54:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.832 09:54:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:29.832 09:54:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.832 09:54:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.832 09:54:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.832 09:54:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:29.832 09:54:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:29.832 09:54:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:29.832 09:54:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.832 09:54:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.832 [2024-10-21 09:54:06.183409] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:29.832 09:54:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.832 09:54:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:29.832 09:54:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:29.832 09:54:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.832 09:54:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:29.832 09:54:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.832 09:54:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.832 09:54:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.832 09:54:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:29.832 09:54:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:29.832 09:54:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:29.832 09:54:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.832 09:54:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.832 [2024-10-21 09:54:06.344831] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:29.832 [2024-10-21 09:54:06.344903] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state offline 00:09:30.092 09:54:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.092 09:54:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:30.092 09:54:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:30.092 09:54:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.092 09:54:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.092 09:54:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.092 09:54:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:30.092 09:54:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.092 09:54:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:30.092 09:54:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:30.092 09:54:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:30.092 09:54:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:30.092 09:54:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:30.092 09:54:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:30.092 09:54:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.092 09:54:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.092 BaseBdev2 00:09:30.092 09:54:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.092 09:54:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:30.092 09:54:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:30.092 09:54:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:30.092 
09:54:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:30.092 09:54:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:30.092 09:54:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:30.092 09:54:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:30.092 09:54:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.092 09:54:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.092 09:54:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.092 09:54:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:30.092 09:54:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.092 09:54:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.092 [ 00:09:30.092 { 00:09:30.092 "name": "BaseBdev2", 00:09:30.092 "aliases": [ 00:09:30.092 "3a375702-319a-4c7f-b1b2-c64be3b69f29" 00:09:30.092 ], 00:09:30.092 "product_name": "Malloc disk", 00:09:30.092 "block_size": 512, 00:09:30.092 "num_blocks": 65536, 00:09:30.092 "uuid": "3a375702-319a-4c7f-b1b2-c64be3b69f29", 00:09:30.092 "assigned_rate_limits": { 00:09:30.092 "rw_ios_per_sec": 0, 00:09:30.092 "rw_mbytes_per_sec": 0, 00:09:30.092 "r_mbytes_per_sec": 0, 00:09:30.092 "w_mbytes_per_sec": 0 00:09:30.092 }, 00:09:30.092 "claimed": false, 00:09:30.092 "zoned": false, 00:09:30.092 "supported_io_types": { 00:09:30.092 "read": true, 00:09:30.092 "write": true, 00:09:30.092 "unmap": true, 00:09:30.092 "flush": true, 00:09:30.092 "reset": true, 00:09:30.092 "nvme_admin": false, 00:09:30.092 "nvme_io": false, 00:09:30.092 "nvme_io_md": false, 00:09:30.092 "write_zeroes": true, 
00:09:30.092 "zcopy": true, 00:09:30.092 "get_zone_info": false, 00:09:30.092 "zone_management": false, 00:09:30.092 "zone_append": false, 00:09:30.092 "compare": false, 00:09:30.092 "compare_and_write": false, 00:09:30.092 "abort": true, 00:09:30.093 "seek_hole": false, 00:09:30.093 "seek_data": false, 00:09:30.093 "copy": true, 00:09:30.093 "nvme_iov_md": false 00:09:30.093 }, 00:09:30.093 "memory_domains": [ 00:09:30.093 { 00:09:30.093 "dma_device_id": "system", 00:09:30.093 "dma_device_type": 1 00:09:30.093 }, 00:09:30.093 { 00:09:30.093 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:30.093 "dma_device_type": 2 00:09:30.093 } 00:09:30.093 ], 00:09:30.093 "driver_specific": {} 00:09:30.093 } 00:09:30.093 ] 00:09:30.093 09:54:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.093 09:54:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:30.093 09:54:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:30.093 09:54:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:30.093 09:54:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:30.093 09:54:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.093 09:54:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.093 BaseBdev3 00:09:30.093 09:54:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.093 09:54:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:30.093 09:54:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:30.093 09:54:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:30.093 09:54:06 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:30.093 09:54:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:30.093 09:54:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:30.093 09:54:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:30.093 09:54:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.093 09:54:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.093 09:54:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.093 09:54:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:30.093 09:54:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.093 09:54:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.093 [ 00:09:30.093 { 00:09:30.093 "name": "BaseBdev3", 00:09:30.093 "aliases": [ 00:09:30.093 "251e941a-d072-4d25-91b6-0d65d0a9c753" 00:09:30.093 ], 00:09:30.093 "product_name": "Malloc disk", 00:09:30.093 "block_size": 512, 00:09:30.093 "num_blocks": 65536, 00:09:30.093 "uuid": "251e941a-d072-4d25-91b6-0d65d0a9c753", 00:09:30.093 "assigned_rate_limits": { 00:09:30.093 "rw_ios_per_sec": 0, 00:09:30.093 "rw_mbytes_per_sec": 0, 00:09:30.093 "r_mbytes_per_sec": 0, 00:09:30.093 "w_mbytes_per_sec": 0 00:09:30.093 }, 00:09:30.093 "claimed": false, 00:09:30.093 "zoned": false, 00:09:30.093 "supported_io_types": { 00:09:30.093 "read": true, 00:09:30.093 "write": true, 00:09:30.093 "unmap": true, 00:09:30.093 "flush": true, 00:09:30.093 "reset": true, 00:09:30.093 "nvme_admin": false, 00:09:30.093 "nvme_io": false, 00:09:30.093 "nvme_io_md": false, 00:09:30.093 "write_zeroes": true, 
00:09:30.093 "zcopy": true, 00:09:30.093 "get_zone_info": false, 00:09:30.093 "zone_management": false, 00:09:30.093 "zone_append": false, 00:09:30.093 "compare": false, 00:09:30.093 "compare_and_write": false, 00:09:30.093 "abort": true, 00:09:30.093 "seek_hole": false, 00:09:30.093 "seek_data": false, 00:09:30.093 "copy": true, 00:09:30.093 "nvme_iov_md": false 00:09:30.093 }, 00:09:30.093 "memory_domains": [ 00:09:30.093 { 00:09:30.093 "dma_device_id": "system", 00:09:30.093 "dma_device_type": 1 00:09:30.093 }, 00:09:30.093 { 00:09:30.093 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:30.093 "dma_device_type": 2 00:09:30.093 } 00:09:30.093 ], 00:09:30.093 "driver_specific": {} 00:09:30.093 } 00:09:30.093 ] 00:09:30.093 09:54:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.093 09:54:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:30.093 09:54:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:30.093 09:54:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:30.093 09:54:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:30.093 09:54:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.093 09:54:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.093 [2024-10-21 09:54:06.673145] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:30.093 [2024-10-21 09:54:06.673274] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:30.093 [2024-10-21 09:54:06.673315] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:30.093 [2024-10-21 09:54:06.675339] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:30.093 09:54:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.093 09:54:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:30.093 09:54:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:30.093 09:54:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:30.093 09:54:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:30.093 09:54:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:30.093 09:54:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:30.093 09:54:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:30.093 09:54:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:30.093 09:54:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:30.093 09:54:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:30.093 09:54:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.093 09:54:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:30.093 09:54:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.093 09:54:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.353 09:54:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.353 09:54:06 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:30.353 "name": "Existed_Raid", 00:09:30.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:30.353 "strip_size_kb": 64, 00:09:30.353 "state": "configuring", 00:09:30.353 "raid_level": "concat", 00:09:30.353 "superblock": false, 00:09:30.353 "num_base_bdevs": 3, 00:09:30.353 "num_base_bdevs_discovered": 2, 00:09:30.353 "num_base_bdevs_operational": 3, 00:09:30.353 "base_bdevs_list": [ 00:09:30.353 { 00:09:30.353 "name": "BaseBdev1", 00:09:30.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:30.353 "is_configured": false, 00:09:30.353 "data_offset": 0, 00:09:30.353 "data_size": 0 00:09:30.353 }, 00:09:30.353 { 00:09:30.353 "name": "BaseBdev2", 00:09:30.353 "uuid": "3a375702-319a-4c7f-b1b2-c64be3b69f29", 00:09:30.353 "is_configured": true, 00:09:30.353 "data_offset": 0, 00:09:30.353 "data_size": 65536 00:09:30.353 }, 00:09:30.353 { 00:09:30.353 "name": "BaseBdev3", 00:09:30.353 "uuid": "251e941a-d072-4d25-91b6-0d65d0a9c753", 00:09:30.353 "is_configured": true, 00:09:30.353 "data_offset": 0, 00:09:30.353 "data_size": 65536 00:09:30.353 } 00:09:30.353 ] 00:09:30.353 }' 00:09:30.353 09:54:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:30.353 09:54:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.614 09:54:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:30.614 09:54:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.614 09:54:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.614 [2024-10-21 09:54:07.140291] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:30.614 09:54:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.614 09:54:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:30.614 09:54:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:30.614 09:54:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:30.614 09:54:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:30.614 09:54:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:30.614 09:54:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:30.614 09:54:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:30.614 09:54:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:30.614 09:54:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:30.614 09:54:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:30.615 09:54:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:30.615 09:54:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.615 09:54:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.615 09:54:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.615 09:54:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.615 09:54:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:30.615 "name": "Existed_Raid", 00:09:30.615 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:30.615 "strip_size_kb": 64, 00:09:30.615 "state": "configuring", 00:09:30.615 "raid_level": "concat", 00:09:30.615 "superblock": false, 
00:09:30.615 "num_base_bdevs": 3, 00:09:30.615 "num_base_bdevs_discovered": 1, 00:09:30.615 "num_base_bdevs_operational": 3, 00:09:30.615 "base_bdevs_list": [ 00:09:30.615 { 00:09:30.615 "name": "BaseBdev1", 00:09:30.615 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:30.615 "is_configured": false, 00:09:30.615 "data_offset": 0, 00:09:30.615 "data_size": 0 00:09:30.615 }, 00:09:30.615 { 00:09:30.615 "name": null, 00:09:30.615 "uuid": "3a375702-319a-4c7f-b1b2-c64be3b69f29", 00:09:30.615 "is_configured": false, 00:09:30.615 "data_offset": 0, 00:09:30.615 "data_size": 65536 00:09:30.615 }, 00:09:30.615 { 00:09:30.615 "name": "BaseBdev3", 00:09:30.615 "uuid": "251e941a-d072-4d25-91b6-0d65d0a9c753", 00:09:30.615 "is_configured": true, 00:09:30.615 "data_offset": 0, 00:09:30.615 "data_size": 65536 00:09:30.615 } 00:09:30.615 ] 00:09:30.615 }' 00:09:30.615 09:54:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:30.615 09:54:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.183 09:54:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.183 09:54:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.183 09:54:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.183 09:54:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:31.183 09:54:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.183 09:54:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:31.183 09:54:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:31.183 09:54:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.183 
09:54:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.183 [2024-10-21 09:54:07.680785] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:31.183 BaseBdev1 00:09:31.183 09:54:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.183 09:54:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:31.183 09:54:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:31.183 09:54:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:31.183 09:54:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:31.183 09:54:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:31.183 09:54:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:31.183 09:54:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:31.183 09:54:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.183 09:54:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.183 09:54:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.183 09:54:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:31.183 09:54:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.183 09:54:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.183 [ 00:09:31.183 { 00:09:31.183 "name": "BaseBdev1", 00:09:31.183 "aliases": [ 00:09:31.183 "1250807a-b4eb-4de5-83ea-abadc13eef57" 00:09:31.183 ], 00:09:31.183 "product_name": 
"Malloc disk", 00:09:31.183 "block_size": 512, 00:09:31.183 "num_blocks": 65536, 00:09:31.183 "uuid": "1250807a-b4eb-4de5-83ea-abadc13eef57", 00:09:31.183 "assigned_rate_limits": { 00:09:31.183 "rw_ios_per_sec": 0, 00:09:31.183 "rw_mbytes_per_sec": 0, 00:09:31.183 "r_mbytes_per_sec": 0, 00:09:31.183 "w_mbytes_per_sec": 0 00:09:31.183 }, 00:09:31.183 "claimed": true, 00:09:31.183 "claim_type": "exclusive_write", 00:09:31.183 "zoned": false, 00:09:31.183 "supported_io_types": { 00:09:31.183 "read": true, 00:09:31.183 "write": true, 00:09:31.183 "unmap": true, 00:09:31.183 "flush": true, 00:09:31.183 "reset": true, 00:09:31.183 "nvme_admin": false, 00:09:31.183 "nvme_io": false, 00:09:31.183 "nvme_io_md": false, 00:09:31.183 "write_zeroes": true, 00:09:31.183 "zcopy": true, 00:09:31.183 "get_zone_info": false, 00:09:31.183 "zone_management": false, 00:09:31.183 "zone_append": false, 00:09:31.183 "compare": false, 00:09:31.183 "compare_and_write": false, 00:09:31.183 "abort": true, 00:09:31.183 "seek_hole": false, 00:09:31.183 "seek_data": false, 00:09:31.183 "copy": true, 00:09:31.183 "nvme_iov_md": false 00:09:31.183 }, 00:09:31.183 "memory_domains": [ 00:09:31.183 { 00:09:31.183 "dma_device_id": "system", 00:09:31.183 "dma_device_type": 1 00:09:31.183 }, 00:09:31.183 { 00:09:31.183 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:31.183 "dma_device_type": 2 00:09:31.183 } 00:09:31.183 ], 00:09:31.183 "driver_specific": {} 00:09:31.183 } 00:09:31.183 ] 00:09:31.183 09:54:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.183 09:54:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:31.183 09:54:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:31.183 09:54:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:31.183 09:54:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:31.183 09:54:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:31.183 09:54:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:31.183 09:54:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:31.183 09:54:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:31.183 09:54:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:31.183 09:54:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:31.183 09:54:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:31.183 09:54:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:31.183 09:54:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.183 09:54:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.183 09:54:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.183 09:54:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.183 09:54:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:31.183 "name": "Existed_Raid", 00:09:31.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:31.183 "strip_size_kb": 64, 00:09:31.183 "state": "configuring", 00:09:31.183 "raid_level": "concat", 00:09:31.183 "superblock": false, 00:09:31.183 "num_base_bdevs": 3, 00:09:31.183 "num_base_bdevs_discovered": 2, 00:09:31.183 "num_base_bdevs_operational": 3, 00:09:31.183 "base_bdevs_list": [ 00:09:31.183 { 00:09:31.183 "name": "BaseBdev1", 
00:09:31.183 "uuid": "1250807a-b4eb-4de5-83ea-abadc13eef57", 00:09:31.183 "is_configured": true, 00:09:31.183 "data_offset": 0, 00:09:31.183 "data_size": 65536 00:09:31.183 }, 00:09:31.183 { 00:09:31.183 "name": null, 00:09:31.183 "uuid": "3a375702-319a-4c7f-b1b2-c64be3b69f29", 00:09:31.183 "is_configured": false, 00:09:31.183 "data_offset": 0, 00:09:31.183 "data_size": 65536 00:09:31.183 }, 00:09:31.183 { 00:09:31.183 "name": "BaseBdev3", 00:09:31.183 "uuid": "251e941a-d072-4d25-91b6-0d65d0a9c753", 00:09:31.183 "is_configured": true, 00:09:31.183 "data_offset": 0, 00:09:31.183 "data_size": 65536 00:09:31.183 } 00:09:31.183 ] 00:09:31.183 }' 00:09:31.183 09:54:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:31.183 09:54:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.752 09:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:31.752 09:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.752 09:54:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.752 09:54:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.752 09:54:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.752 09:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:31.752 09:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:31.752 09:54:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.753 09:54:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.753 [2024-10-21 09:54:08.187972] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:31.753 
09:54:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.753 09:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:31.753 09:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:31.753 09:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:31.753 09:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:31.753 09:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:31.753 09:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:31.753 09:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:31.753 09:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:31.753 09:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:31.753 09:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:31.753 09:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.753 09:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:31.753 09:54:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.753 09:54:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.753 09:54:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.753 09:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:31.753 "name": "Existed_Raid", 00:09:31.753 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:31.753 "strip_size_kb": 64, 00:09:31.753 "state": "configuring", 00:09:31.753 "raid_level": "concat", 00:09:31.753 "superblock": false, 00:09:31.753 "num_base_bdevs": 3, 00:09:31.753 "num_base_bdevs_discovered": 1, 00:09:31.753 "num_base_bdevs_operational": 3, 00:09:31.753 "base_bdevs_list": [ 00:09:31.753 { 00:09:31.753 "name": "BaseBdev1", 00:09:31.753 "uuid": "1250807a-b4eb-4de5-83ea-abadc13eef57", 00:09:31.753 "is_configured": true, 00:09:31.753 "data_offset": 0, 00:09:31.753 "data_size": 65536 00:09:31.753 }, 00:09:31.753 { 00:09:31.753 "name": null, 00:09:31.753 "uuid": "3a375702-319a-4c7f-b1b2-c64be3b69f29", 00:09:31.753 "is_configured": false, 00:09:31.753 "data_offset": 0, 00:09:31.753 "data_size": 65536 00:09:31.753 }, 00:09:31.753 { 00:09:31.753 "name": null, 00:09:31.753 "uuid": "251e941a-d072-4d25-91b6-0d65d0a9c753", 00:09:31.753 "is_configured": false, 00:09:31.753 "data_offset": 0, 00:09:31.753 "data_size": 65536 00:09:31.753 } 00:09:31.753 ] 00:09:31.753 }' 00:09:31.753 09:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:31.753 09:54:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.322 09:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.322 09:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:32.322 09:54:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.322 09:54:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.322 09:54:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.322 09:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:32.322 09:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:32.322 09:54:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.322 09:54:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.322 [2024-10-21 09:54:08.667212] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:32.322 09:54:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.322 09:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:32.322 09:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:32.322 09:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:32.322 09:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:32.322 09:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:32.322 09:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:32.322 09:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:32.322 09:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:32.322 09:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:32.322 09:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:32.322 09:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.322 09:54:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.322 09:54:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:09:32.322 09:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:32.323 09:54:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.323 09:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:32.323 "name": "Existed_Raid", 00:09:32.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.323 "strip_size_kb": 64, 00:09:32.323 "state": "configuring", 00:09:32.323 "raid_level": "concat", 00:09:32.323 "superblock": false, 00:09:32.323 "num_base_bdevs": 3, 00:09:32.323 "num_base_bdevs_discovered": 2, 00:09:32.323 "num_base_bdevs_operational": 3, 00:09:32.323 "base_bdevs_list": [ 00:09:32.323 { 00:09:32.323 "name": "BaseBdev1", 00:09:32.323 "uuid": "1250807a-b4eb-4de5-83ea-abadc13eef57", 00:09:32.323 "is_configured": true, 00:09:32.323 "data_offset": 0, 00:09:32.323 "data_size": 65536 00:09:32.323 }, 00:09:32.323 { 00:09:32.323 "name": null, 00:09:32.323 "uuid": "3a375702-319a-4c7f-b1b2-c64be3b69f29", 00:09:32.323 "is_configured": false, 00:09:32.323 "data_offset": 0, 00:09:32.323 "data_size": 65536 00:09:32.323 }, 00:09:32.323 { 00:09:32.323 "name": "BaseBdev3", 00:09:32.323 "uuid": "251e941a-d072-4d25-91b6-0d65d0a9c753", 00:09:32.323 "is_configured": true, 00:09:32.323 "data_offset": 0, 00:09:32.323 "data_size": 65536 00:09:32.323 } 00:09:32.323 ] 00:09:32.323 }' 00:09:32.323 09:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:32.323 09:54:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.583 09:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:32.583 09:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.583 09:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:32.583 09:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.584 09:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.584 09:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:32.584 09:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:32.584 09:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.584 09:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.584 [2024-10-21 09:54:09.166381] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:32.844 09:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.844 09:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:32.844 09:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:32.844 09:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:32.844 09:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:32.844 09:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:32.844 09:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:32.844 09:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:32.844 09:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:32.844 09:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:32.844 09:54:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:32.844 09:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.844 09:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:32.844 09:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.844 09:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.844 09:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.844 09:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:32.844 "name": "Existed_Raid", 00:09:32.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.844 "strip_size_kb": 64, 00:09:32.844 "state": "configuring", 00:09:32.844 "raid_level": "concat", 00:09:32.844 "superblock": false, 00:09:32.844 "num_base_bdevs": 3, 00:09:32.844 "num_base_bdevs_discovered": 1, 00:09:32.844 "num_base_bdevs_operational": 3, 00:09:32.844 "base_bdevs_list": [ 00:09:32.844 { 00:09:32.844 "name": null, 00:09:32.844 "uuid": "1250807a-b4eb-4de5-83ea-abadc13eef57", 00:09:32.844 "is_configured": false, 00:09:32.844 "data_offset": 0, 00:09:32.844 "data_size": 65536 00:09:32.844 }, 00:09:32.844 { 00:09:32.844 "name": null, 00:09:32.844 "uuid": "3a375702-319a-4c7f-b1b2-c64be3b69f29", 00:09:32.844 "is_configured": false, 00:09:32.844 "data_offset": 0, 00:09:32.844 "data_size": 65536 00:09:32.844 }, 00:09:32.844 { 00:09:32.844 "name": "BaseBdev3", 00:09:32.844 "uuid": "251e941a-d072-4d25-91b6-0d65d0a9c753", 00:09:32.844 "is_configured": true, 00:09:32.844 "data_offset": 0, 00:09:32.844 "data_size": 65536 00:09:32.844 } 00:09:32.844 ] 00:09:32.844 }' 00:09:32.844 09:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:32.844 09:54:09 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.413 09:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.413 09:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:33.413 09:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.413 09:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.413 09:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.413 09:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:33.413 09:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:33.413 09:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.413 09:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.413 [2024-10-21 09:54:09.782682] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:33.413 09:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.413 09:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:33.413 09:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:33.413 09:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:33.413 09:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:33.413 09:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:33.413 09:54:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:33.413 09:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.413 09:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.413 09:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.413 09:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.413 09:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.413 09:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.413 09:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.413 09:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:33.413 09:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.413 09:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.413 "name": "Existed_Raid", 00:09:33.413 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.413 "strip_size_kb": 64, 00:09:33.413 "state": "configuring", 00:09:33.413 "raid_level": "concat", 00:09:33.413 "superblock": false, 00:09:33.413 "num_base_bdevs": 3, 00:09:33.413 "num_base_bdevs_discovered": 2, 00:09:33.413 "num_base_bdevs_operational": 3, 00:09:33.413 "base_bdevs_list": [ 00:09:33.413 { 00:09:33.413 "name": null, 00:09:33.413 "uuid": "1250807a-b4eb-4de5-83ea-abadc13eef57", 00:09:33.413 "is_configured": false, 00:09:33.413 "data_offset": 0, 00:09:33.413 "data_size": 65536 00:09:33.413 }, 00:09:33.413 { 00:09:33.413 "name": "BaseBdev2", 00:09:33.413 "uuid": "3a375702-319a-4c7f-b1b2-c64be3b69f29", 00:09:33.413 "is_configured": true, 00:09:33.413 "data_offset": 
0, 00:09:33.413 "data_size": 65536 00:09:33.413 }, 00:09:33.413 { 00:09:33.413 "name": "BaseBdev3", 00:09:33.413 "uuid": "251e941a-d072-4d25-91b6-0d65d0a9c753", 00:09:33.413 "is_configured": true, 00:09:33.413 "data_offset": 0, 00:09:33.413 "data_size": 65536 00:09:33.413 } 00:09:33.413 ] 00:09:33.413 }' 00:09:33.413 09:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.413 09:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.672 09:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.672 09:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:33.672 09:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.672 09:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.672 09:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.672 09:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:33.933 09:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.933 09:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.933 09:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.933 09:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:33.933 09:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.933 09:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 1250807a-b4eb-4de5-83ea-abadc13eef57 00:09:33.933 09:54:10 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.933 09:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.933 [2024-10-21 09:54:10.342326] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:33.933 [2024-10-21 09:54:10.342376] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:09:33.933 [2024-10-21 09:54:10.342385] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:33.933 [2024-10-21 09:54:10.342715] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:09:33.933 [2024-10-21 09:54:10.342895] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:09:33.933 [2024-10-21 09:54:10.342912] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006600 00:09:33.933 [2024-10-21 09:54:10.343187] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:33.933 NewBaseBdev 00:09:33.933 09:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.933 09:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:33.933 09:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:09:33.933 09:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:33.934 09:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:33.934 09:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:33.934 09:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:33.934 09:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:33.934 
09:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.934 09:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.934 09:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.934 09:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:33.934 09:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.934 09:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.934 [ 00:09:33.934 { 00:09:33.934 "name": "NewBaseBdev", 00:09:33.934 "aliases": [ 00:09:33.934 "1250807a-b4eb-4de5-83ea-abadc13eef57" 00:09:33.934 ], 00:09:33.934 "product_name": "Malloc disk", 00:09:33.934 "block_size": 512, 00:09:33.934 "num_blocks": 65536, 00:09:33.934 "uuid": "1250807a-b4eb-4de5-83ea-abadc13eef57", 00:09:33.934 "assigned_rate_limits": { 00:09:33.934 "rw_ios_per_sec": 0, 00:09:33.934 "rw_mbytes_per_sec": 0, 00:09:33.934 "r_mbytes_per_sec": 0, 00:09:33.934 "w_mbytes_per_sec": 0 00:09:33.934 }, 00:09:33.934 "claimed": true, 00:09:33.934 "claim_type": "exclusive_write", 00:09:33.934 "zoned": false, 00:09:33.934 "supported_io_types": { 00:09:33.934 "read": true, 00:09:33.934 "write": true, 00:09:33.934 "unmap": true, 00:09:33.934 "flush": true, 00:09:33.934 "reset": true, 00:09:33.934 "nvme_admin": false, 00:09:33.934 "nvme_io": false, 00:09:33.934 "nvme_io_md": false, 00:09:33.934 "write_zeroes": true, 00:09:33.934 "zcopy": true, 00:09:33.934 "get_zone_info": false, 00:09:33.934 "zone_management": false, 00:09:33.934 "zone_append": false, 00:09:33.934 "compare": false, 00:09:33.934 "compare_and_write": false, 00:09:33.934 "abort": true, 00:09:33.934 "seek_hole": false, 00:09:33.934 "seek_data": false, 00:09:33.934 "copy": true, 00:09:33.934 "nvme_iov_md": false 00:09:33.934 }, 00:09:33.934 
"memory_domains": [ 00:09:33.934 { 00:09:33.934 "dma_device_id": "system", 00:09:33.934 "dma_device_type": 1 00:09:33.934 }, 00:09:33.934 { 00:09:33.934 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.934 "dma_device_type": 2 00:09:33.934 } 00:09:33.934 ], 00:09:33.934 "driver_specific": {} 00:09:33.934 } 00:09:33.934 ] 00:09:33.934 09:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.934 09:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:33.934 09:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:33.934 09:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:33.934 09:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:33.934 09:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:33.934 09:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:33.934 09:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:33.934 09:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.934 09:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.934 09:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.934 09:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.934 09:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.934 09:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:33.934 09:54:10 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.934 09:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.934 09:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.934 09:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.934 "name": "Existed_Raid", 00:09:33.934 "uuid": "df05b3d5-18e5-4f63-922b-81e743b18a50", 00:09:33.934 "strip_size_kb": 64, 00:09:33.934 "state": "online", 00:09:33.934 "raid_level": "concat", 00:09:33.934 "superblock": false, 00:09:33.934 "num_base_bdevs": 3, 00:09:33.934 "num_base_bdevs_discovered": 3, 00:09:33.934 "num_base_bdevs_operational": 3, 00:09:33.934 "base_bdevs_list": [ 00:09:33.934 { 00:09:33.934 "name": "NewBaseBdev", 00:09:33.934 "uuid": "1250807a-b4eb-4de5-83ea-abadc13eef57", 00:09:33.934 "is_configured": true, 00:09:33.934 "data_offset": 0, 00:09:33.934 "data_size": 65536 00:09:33.934 }, 00:09:33.934 { 00:09:33.934 "name": "BaseBdev2", 00:09:33.934 "uuid": "3a375702-319a-4c7f-b1b2-c64be3b69f29", 00:09:33.934 "is_configured": true, 00:09:33.934 "data_offset": 0, 00:09:33.934 "data_size": 65536 00:09:33.934 }, 00:09:33.934 { 00:09:33.934 "name": "BaseBdev3", 00:09:33.934 "uuid": "251e941a-d072-4d25-91b6-0d65d0a9c753", 00:09:33.934 "is_configured": true, 00:09:33.934 "data_offset": 0, 00:09:33.934 "data_size": 65536 00:09:33.934 } 00:09:33.934 ] 00:09:33.934 }' 00:09:33.934 09:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.934 09:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.516 09:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:34.516 09:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:34.516 09:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 
-- # local raid_bdev_info 00:09:34.516 09:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:34.516 09:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:34.516 09:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:34.516 09:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:34.516 09:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:34.516 09:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.516 09:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.516 [2024-10-21 09:54:10.829922] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:34.516 09:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.516 09:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:34.516 "name": "Existed_Raid", 00:09:34.516 "aliases": [ 00:09:34.516 "df05b3d5-18e5-4f63-922b-81e743b18a50" 00:09:34.516 ], 00:09:34.516 "product_name": "Raid Volume", 00:09:34.516 "block_size": 512, 00:09:34.516 "num_blocks": 196608, 00:09:34.516 "uuid": "df05b3d5-18e5-4f63-922b-81e743b18a50", 00:09:34.516 "assigned_rate_limits": { 00:09:34.516 "rw_ios_per_sec": 0, 00:09:34.516 "rw_mbytes_per_sec": 0, 00:09:34.516 "r_mbytes_per_sec": 0, 00:09:34.516 "w_mbytes_per_sec": 0 00:09:34.516 }, 00:09:34.516 "claimed": false, 00:09:34.516 "zoned": false, 00:09:34.516 "supported_io_types": { 00:09:34.516 "read": true, 00:09:34.516 "write": true, 00:09:34.516 "unmap": true, 00:09:34.516 "flush": true, 00:09:34.516 "reset": true, 00:09:34.516 "nvme_admin": false, 00:09:34.516 "nvme_io": false, 00:09:34.516 "nvme_io_md": false, 00:09:34.516 "write_zeroes": true, 
00:09:34.516 "zcopy": false, 00:09:34.516 "get_zone_info": false, 00:09:34.516 "zone_management": false, 00:09:34.516 "zone_append": false, 00:09:34.516 "compare": false, 00:09:34.516 "compare_and_write": false, 00:09:34.516 "abort": false, 00:09:34.516 "seek_hole": false, 00:09:34.516 "seek_data": false, 00:09:34.516 "copy": false, 00:09:34.516 "nvme_iov_md": false 00:09:34.516 }, 00:09:34.517 "memory_domains": [ 00:09:34.517 { 00:09:34.517 "dma_device_id": "system", 00:09:34.517 "dma_device_type": 1 00:09:34.517 }, 00:09:34.517 { 00:09:34.517 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.517 "dma_device_type": 2 00:09:34.517 }, 00:09:34.517 { 00:09:34.517 "dma_device_id": "system", 00:09:34.517 "dma_device_type": 1 00:09:34.517 }, 00:09:34.517 { 00:09:34.517 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.517 "dma_device_type": 2 00:09:34.517 }, 00:09:34.517 { 00:09:34.517 "dma_device_id": "system", 00:09:34.517 "dma_device_type": 1 00:09:34.517 }, 00:09:34.517 { 00:09:34.517 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.517 "dma_device_type": 2 00:09:34.517 } 00:09:34.517 ], 00:09:34.517 "driver_specific": { 00:09:34.517 "raid": { 00:09:34.517 "uuid": "df05b3d5-18e5-4f63-922b-81e743b18a50", 00:09:34.517 "strip_size_kb": 64, 00:09:34.517 "state": "online", 00:09:34.517 "raid_level": "concat", 00:09:34.517 "superblock": false, 00:09:34.517 "num_base_bdevs": 3, 00:09:34.517 "num_base_bdevs_discovered": 3, 00:09:34.517 "num_base_bdevs_operational": 3, 00:09:34.517 "base_bdevs_list": [ 00:09:34.517 { 00:09:34.517 "name": "NewBaseBdev", 00:09:34.517 "uuid": "1250807a-b4eb-4de5-83ea-abadc13eef57", 00:09:34.517 "is_configured": true, 00:09:34.517 "data_offset": 0, 00:09:34.517 "data_size": 65536 00:09:34.517 }, 00:09:34.517 { 00:09:34.517 "name": "BaseBdev2", 00:09:34.517 "uuid": "3a375702-319a-4c7f-b1b2-c64be3b69f29", 00:09:34.517 "is_configured": true, 00:09:34.517 "data_offset": 0, 00:09:34.517 "data_size": 65536 00:09:34.517 }, 00:09:34.517 { 
00:09:34.517 "name": "BaseBdev3", 00:09:34.517 "uuid": "251e941a-d072-4d25-91b6-0d65d0a9c753", 00:09:34.517 "is_configured": true, 00:09:34.517 "data_offset": 0, 00:09:34.517 "data_size": 65536 00:09:34.517 } 00:09:34.517 ] 00:09:34.517 } 00:09:34.517 } 00:09:34.517 }' 00:09:34.517 09:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:34.517 09:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:34.517 BaseBdev2 00:09:34.517 BaseBdev3' 00:09:34.517 09:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:34.517 09:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:34.517 09:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:34.517 09:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:34.517 09:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.517 09:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.517 09:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:34.517 09:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.517 09:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:34.517 09:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:34.517 09:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:34.517 09:54:11 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:34.517 09:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:34.517 09:54:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.517 09:54:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.517 09:54:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.517 09:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:34.517 09:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:34.517 09:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:34.517 09:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:34.517 09:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:34.517 09:54:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.517 09:54:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.517 09:54:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.804 09:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:34.804 09:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:34.804 09:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:34.804 09:54:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.804 09:54:11 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:09:34.804 [2024-10-21 09:54:11.121072] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:34.804 [2024-10-21 09:54:11.121116] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:34.804 [2024-10-21 09:54:11.121212] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:34.804 [2024-10-21 09:54:11.121276] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:34.804 [2024-10-21 09:54:11.121289] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state offline 00:09:34.804 09:54:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.804 09:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 65192 00:09:34.804 09:54:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 65192 ']' 00:09:34.804 09:54:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 65192 00:09:34.804 09:54:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:09:34.804 09:54:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:34.804 09:54:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65192 00:09:34.804 killing process with pid 65192 00:09:34.804 09:54:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:34.804 09:54:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:34.804 09:54:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65192' 00:09:34.804 09:54:11 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@969 -- # kill 65192 00:09:34.804 [2024-10-21 09:54:11.162977] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:34.804 09:54:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 65192 00:09:35.064 [2024-10-21 09:54:11.491207] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:36.444 09:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:36.444 00:09:36.444 real 0m10.769s 00:09:36.444 user 0m16.914s 00:09:36.444 sys 0m1.883s 00:09:36.444 09:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:36.444 ************************************ 00:09:36.444 END TEST raid_state_function_test 00:09:36.444 ************************************ 00:09:36.444 09:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.444 09:54:12 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:09:36.444 09:54:12 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:36.444 09:54:12 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:36.444 09:54:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:36.444 ************************************ 00:09:36.444 START TEST raid_state_function_test_sb 00:09:36.444 ************************************ 00:09:36.444 09:54:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 3 true 00:09:36.444 09:54:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:36.444 09:54:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:36.444 09:54:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:36.445 09:54:12 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:36.445 09:54:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:36.445 09:54:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:36.445 09:54:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:36.445 09:54:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:36.445 09:54:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:36.445 09:54:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:36.445 09:54:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:36.445 09:54:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:36.445 09:54:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:36.445 09:54:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:36.445 09:54:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:36.445 09:54:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:36.445 09:54:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:36.445 09:54:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:36.445 09:54:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:36.445 09:54:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:36.445 09:54:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:36.445 09:54:12 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:36.445 09:54:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:36.445 09:54:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:36.445 09:54:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:36.445 09:54:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:36.445 09:54:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=65813 00:09:36.445 09:54:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:36.445 Process raid pid: 65813 00:09:36.445 09:54:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65813' 00:09:36.445 09:54:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 65813 00:09:36.445 09:54:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 65813 ']' 00:09:36.445 09:54:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:36.445 09:54:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:36.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:36.445 09:54:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:36.445 09:54:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:36.445 09:54:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.445 [2024-10-21 09:54:12.888952] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:09:36.445 [2024-10-21 09:54:12.889084] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:36.704 [2024-10-21 09:54:13.047996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:36.704 [2024-10-21 09:54:13.186975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:36.964 [2024-10-21 09:54:13.445708] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:36.964 [2024-10-21 09:54:13.445757] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:37.224 09:54:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:37.224 09:54:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:09:37.224 09:54:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:37.224 09:54:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.224 09:54:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.224 [2024-10-21 09:54:13.724560] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:37.224 [2024-10-21 09:54:13.724640] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:37.224 [2024-10-21 
09:54:13.724649] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:37.224 [2024-10-21 09:54:13.724659] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:37.224 [2024-10-21 09:54:13.724666] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:37.224 [2024-10-21 09:54:13.724674] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:37.224 09:54:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.224 09:54:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:37.224 09:54:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:37.224 09:54:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:37.224 09:54:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:37.224 09:54:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:37.224 09:54:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:37.224 09:54:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.224 09:54:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:37.224 09:54:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:37.224 09:54:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:37.224 09:54:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.224 09:54:13 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:37.224 09:54:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.224 09:54:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.224 09:54:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.224 09:54:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.224 "name": "Existed_Raid", 00:09:37.224 "uuid": "61cc9a74-8c27-4548-a8f0-e9a05f16a9b8", 00:09:37.224 "strip_size_kb": 64, 00:09:37.224 "state": "configuring", 00:09:37.224 "raid_level": "concat", 00:09:37.224 "superblock": true, 00:09:37.224 "num_base_bdevs": 3, 00:09:37.224 "num_base_bdevs_discovered": 0, 00:09:37.224 "num_base_bdevs_operational": 3, 00:09:37.224 "base_bdevs_list": [ 00:09:37.224 { 00:09:37.224 "name": "BaseBdev1", 00:09:37.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.224 "is_configured": false, 00:09:37.224 "data_offset": 0, 00:09:37.224 "data_size": 0 00:09:37.224 }, 00:09:37.224 { 00:09:37.224 "name": "BaseBdev2", 00:09:37.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.224 "is_configured": false, 00:09:37.224 "data_offset": 0, 00:09:37.224 "data_size": 0 00:09:37.224 }, 00:09:37.224 { 00:09:37.224 "name": "BaseBdev3", 00:09:37.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.224 "is_configured": false, 00:09:37.224 "data_offset": 0, 00:09:37.224 "data_size": 0 00:09:37.224 } 00:09:37.224 ] 00:09:37.224 }' 00:09:37.224 09:54:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.224 09:54:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.803 09:54:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:37.803 09:54:14 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.803 09:54:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.803 [2024-10-21 09:54:14.135803] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:37.803 [2024-10-21 09:54:14.135912] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000005b80 name Existed_Raid, state configuring 00:09:37.803 09:54:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.803 09:54:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:37.803 09:54:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.803 09:54:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.803 [2024-10-21 09:54:14.143822] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:37.803 [2024-10-21 09:54:14.143908] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:37.803 [2024-10-21 09:54:14.143937] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:37.803 [2024-10-21 09:54:14.143959] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:37.803 [2024-10-21 09:54:14.143977] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:37.803 [2024-10-21 09:54:14.143998] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:37.803 09:54:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.803 09:54:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:37.803 
09:54:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.803 09:54:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.803 [2024-10-21 09:54:14.191059] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:37.803 BaseBdev1 00:09:37.803 09:54:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.803 09:54:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:37.803 09:54:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:37.803 09:54:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:37.803 09:54:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:37.803 09:54:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:37.803 09:54:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:37.803 09:54:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:37.803 09:54:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.803 09:54:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.803 09:54:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.803 09:54:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:37.803 09:54:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.803 09:54:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.803 [ 00:09:37.803 { 
00:09:37.803 "name": "BaseBdev1", 00:09:37.803 "aliases": [ 00:09:37.803 "bc060c52-6aea-44a5-a956-a2d76691f617" 00:09:37.803 ], 00:09:37.803 "product_name": "Malloc disk", 00:09:37.803 "block_size": 512, 00:09:37.803 "num_blocks": 65536, 00:09:37.803 "uuid": "bc060c52-6aea-44a5-a956-a2d76691f617", 00:09:37.803 "assigned_rate_limits": { 00:09:37.803 "rw_ios_per_sec": 0, 00:09:37.803 "rw_mbytes_per_sec": 0, 00:09:37.803 "r_mbytes_per_sec": 0, 00:09:37.803 "w_mbytes_per_sec": 0 00:09:37.803 }, 00:09:37.803 "claimed": true, 00:09:37.803 "claim_type": "exclusive_write", 00:09:37.803 "zoned": false, 00:09:37.803 "supported_io_types": { 00:09:37.803 "read": true, 00:09:37.803 "write": true, 00:09:37.803 "unmap": true, 00:09:37.803 "flush": true, 00:09:37.803 "reset": true, 00:09:37.803 "nvme_admin": false, 00:09:37.803 "nvme_io": false, 00:09:37.803 "nvme_io_md": false, 00:09:37.803 "write_zeroes": true, 00:09:37.803 "zcopy": true, 00:09:37.803 "get_zone_info": false, 00:09:37.803 "zone_management": false, 00:09:37.803 "zone_append": false, 00:09:37.803 "compare": false, 00:09:37.803 "compare_and_write": false, 00:09:37.803 "abort": true, 00:09:37.803 "seek_hole": false, 00:09:37.803 "seek_data": false, 00:09:37.803 "copy": true, 00:09:37.803 "nvme_iov_md": false 00:09:37.803 }, 00:09:37.803 "memory_domains": [ 00:09:37.803 { 00:09:37.803 "dma_device_id": "system", 00:09:37.803 "dma_device_type": 1 00:09:37.803 }, 00:09:37.803 { 00:09:37.803 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.803 "dma_device_type": 2 00:09:37.803 } 00:09:37.803 ], 00:09:37.803 "driver_specific": {} 00:09:37.803 } 00:09:37.803 ] 00:09:37.803 09:54:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.804 09:54:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:37.804 09:54:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
00:09:37.804 09:54:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:37.804 09:54:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:37.804 09:54:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:37.804 09:54:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:37.804 09:54:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:37.804 09:54:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.804 09:54:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:37.804 09:54:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:37.804 09:54:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:37.804 09:54:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.804 09:54:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.804 09:54:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.804 09:54:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:37.804 09:54:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.804 09:54:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.804 "name": "Existed_Raid", 00:09:37.804 "uuid": "5c4c0cd1-8bab-4d4e-989d-c2d07ec02f26", 00:09:37.804 "strip_size_kb": 64, 00:09:37.804 "state": "configuring", 00:09:37.804 "raid_level": "concat", 00:09:37.804 "superblock": true, 00:09:37.804 
"num_base_bdevs": 3, 00:09:37.804 "num_base_bdevs_discovered": 1, 00:09:37.804 "num_base_bdevs_operational": 3, 00:09:37.804 "base_bdevs_list": [ 00:09:37.804 { 00:09:37.804 "name": "BaseBdev1", 00:09:37.804 "uuid": "bc060c52-6aea-44a5-a956-a2d76691f617", 00:09:37.804 "is_configured": true, 00:09:37.804 "data_offset": 2048, 00:09:37.804 "data_size": 63488 00:09:37.804 }, 00:09:37.804 { 00:09:37.804 "name": "BaseBdev2", 00:09:37.804 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.804 "is_configured": false, 00:09:37.804 "data_offset": 0, 00:09:37.804 "data_size": 0 00:09:37.804 }, 00:09:37.804 { 00:09:37.804 "name": "BaseBdev3", 00:09:37.804 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.804 "is_configured": false, 00:09:37.804 "data_offset": 0, 00:09:37.804 "data_size": 0 00:09:37.804 } 00:09:37.804 ] 00:09:37.804 }' 00:09:37.804 09:54:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.804 09:54:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.064 09:54:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:38.064 09:54:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.064 09:54:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.064 [2024-10-21 09:54:14.642344] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:38.064 [2024-10-21 09:54:14.642399] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000005f00 name Existed_Raid, state configuring 00:09:38.064 09:54:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.064 09:54:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:38.064 
09:54:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.064 09:54:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.064 [2024-10-21 09:54:14.654389] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:38.064 [2024-10-21 09:54:14.656433] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:38.064 [2024-10-21 09:54:14.656503] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:38.064 [2024-10-21 09:54:14.656513] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:38.064 [2024-10-21 09:54:14.656523] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:38.323 09:54:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.323 09:54:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:38.323 09:54:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:38.323 09:54:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:38.323 09:54:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:38.323 09:54:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:38.323 09:54:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:38.323 09:54:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:38.323 09:54:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:38.323 09:54:14 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.323 09:54:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.323 09:54:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.323 09:54:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.323 09:54:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.323 09:54:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.323 09:54:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.323 09:54:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:38.323 09:54:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.323 09:54:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.323 "name": "Existed_Raid", 00:09:38.323 "uuid": "83940bb5-c2a0-4feb-afaf-a456ab90d319", 00:09:38.323 "strip_size_kb": 64, 00:09:38.323 "state": "configuring", 00:09:38.323 "raid_level": "concat", 00:09:38.323 "superblock": true, 00:09:38.323 "num_base_bdevs": 3, 00:09:38.323 "num_base_bdevs_discovered": 1, 00:09:38.323 "num_base_bdevs_operational": 3, 00:09:38.323 "base_bdevs_list": [ 00:09:38.323 { 00:09:38.323 "name": "BaseBdev1", 00:09:38.323 "uuid": "bc060c52-6aea-44a5-a956-a2d76691f617", 00:09:38.323 "is_configured": true, 00:09:38.323 "data_offset": 2048, 00:09:38.323 "data_size": 63488 00:09:38.323 }, 00:09:38.323 { 00:09:38.323 "name": "BaseBdev2", 00:09:38.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:38.323 "is_configured": false, 00:09:38.323 "data_offset": 0, 00:09:38.323 "data_size": 0 00:09:38.323 }, 00:09:38.323 { 00:09:38.323 "name": "BaseBdev3", 00:09:38.323 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:38.323 "is_configured": false, 00:09:38.323 "data_offset": 0, 00:09:38.323 "data_size": 0 00:09:38.323 } 00:09:38.323 ] 00:09:38.323 }' 00:09:38.323 09:54:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.323 09:54:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.582 09:54:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:38.582 09:54:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.582 09:54:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.582 [2024-10-21 09:54:15.151286] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:38.582 BaseBdev2 00:09:38.582 09:54:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.582 09:54:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:38.582 09:54:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:38.582 09:54:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:38.582 09:54:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:38.582 09:54:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:38.583 09:54:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:38.583 09:54:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:38.583 09:54:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.583 09:54:15 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:38.583 09:54:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.583 09:54:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:38.583 09:54:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.583 09:54:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.583 [ 00:09:38.583 { 00:09:38.583 "name": "BaseBdev2", 00:09:38.583 "aliases": [ 00:09:38.583 "f9b5f111-0717-4451-98dd-b0d0bb7e17bf" 00:09:38.583 ], 00:09:38.841 "product_name": "Malloc disk", 00:09:38.841 "block_size": 512, 00:09:38.841 "num_blocks": 65536, 00:09:38.841 "uuid": "f9b5f111-0717-4451-98dd-b0d0bb7e17bf", 00:09:38.841 "assigned_rate_limits": { 00:09:38.841 "rw_ios_per_sec": 0, 00:09:38.841 "rw_mbytes_per_sec": 0, 00:09:38.841 "r_mbytes_per_sec": 0, 00:09:38.841 "w_mbytes_per_sec": 0 00:09:38.841 }, 00:09:38.841 "claimed": true, 00:09:38.841 "claim_type": "exclusive_write", 00:09:38.841 "zoned": false, 00:09:38.841 "supported_io_types": { 00:09:38.841 "read": true, 00:09:38.841 "write": true, 00:09:38.841 "unmap": true, 00:09:38.841 "flush": true, 00:09:38.841 "reset": true, 00:09:38.841 "nvme_admin": false, 00:09:38.841 "nvme_io": false, 00:09:38.841 "nvme_io_md": false, 00:09:38.841 "write_zeroes": true, 00:09:38.841 "zcopy": true, 00:09:38.841 "get_zone_info": false, 00:09:38.841 "zone_management": false, 00:09:38.841 "zone_append": false, 00:09:38.841 "compare": false, 00:09:38.841 "compare_and_write": false, 00:09:38.841 "abort": true, 00:09:38.841 "seek_hole": false, 00:09:38.841 "seek_data": false, 00:09:38.841 "copy": true, 00:09:38.841 "nvme_iov_md": false 00:09:38.841 }, 00:09:38.841 "memory_domains": [ 00:09:38.841 { 00:09:38.841 "dma_device_id": "system", 00:09:38.841 "dma_device_type": 1 00:09:38.841 }, 00:09:38.841 { 00:09:38.841 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:38.841 "dma_device_type": 2 00:09:38.841 } 00:09:38.841 ], 00:09:38.841 "driver_specific": {} 00:09:38.841 } 00:09:38.841 ] 00:09:38.841 09:54:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.841 09:54:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:38.841 09:54:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:38.841 09:54:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:38.841 09:54:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:38.841 09:54:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:38.841 09:54:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:38.841 09:54:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:38.841 09:54:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:38.841 09:54:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:38.841 09:54:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.841 09:54:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.841 09:54:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.841 09:54:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.841 09:54:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.841 09:54:15 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:38.841 09:54:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.842 09:54:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.842 09:54:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.842 09:54:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.842 "name": "Existed_Raid", 00:09:38.842 "uuid": "83940bb5-c2a0-4feb-afaf-a456ab90d319", 00:09:38.842 "strip_size_kb": 64, 00:09:38.842 "state": "configuring", 00:09:38.842 "raid_level": "concat", 00:09:38.842 "superblock": true, 00:09:38.842 "num_base_bdevs": 3, 00:09:38.842 "num_base_bdevs_discovered": 2, 00:09:38.842 "num_base_bdevs_operational": 3, 00:09:38.842 "base_bdevs_list": [ 00:09:38.842 { 00:09:38.842 "name": "BaseBdev1", 00:09:38.842 "uuid": "bc060c52-6aea-44a5-a956-a2d76691f617", 00:09:38.842 "is_configured": true, 00:09:38.842 "data_offset": 2048, 00:09:38.842 "data_size": 63488 00:09:38.842 }, 00:09:38.842 { 00:09:38.842 "name": "BaseBdev2", 00:09:38.842 "uuid": "f9b5f111-0717-4451-98dd-b0d0bb7e17bf", 00:09:38.842 "is_configured": true, 00:09:38.842 "data_offset": 2048, 00:09:38.842 "data_size": 63488 00:09:38.842 }, 00:09:38.842 { 00:09:38.842 "name": "BaseBdev3", 00:09:38.842 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:38.842 "is_configured": false, 00:09:38.842 "data_offset": 0, 00:09:38.842 "data_size": 0 00:09:38.842 } 00:09:38.842 ] 00:09:38.842 }' 00:09:38.842 09:54:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.842 09:54:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.102 09:54:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:39.102 09:54:15 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.102 09:54:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.103 [2024-10-21 09:54:15.610411] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:39.103 [2024-10-21 09:54:15.610789] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:09:39.103 BaseBdev3 00:09:39.103 [2024-10-21 09:54:15.610852] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:39.103 [2024-10-21 09:54:15.611127] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:09:39.103 [2024-10-21 09:54:15.611292] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:09:39.103 [2024-10-21 09:54:15.611302] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006280 00:09:39.103 [2024-10-21 09:54:15.611446] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:39.103 09:54:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.103 09:54:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:39.103 09:54:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:39.103 09:54:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:39.103 09:54:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:39.103 09:54:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:39.103 09:54:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:39.103 09:54:15 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:39.103 09:54:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.103 09:54:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.103 09:54:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.103 09:54:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:39.103 09:54:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.103 09:54:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.103 [ 00:09:39.103 { 00:09:39.103 "name": "BaseBdev3", 00:09:39.103 "aliases": [ 00:09:39.103 "07952b50-4ff3-440a-b69e-349c3bf79004" 00:09:39.103 ], 00:09:39.103 "product_name": "Malloc disk", 00:09:39.103 "block_size": 512, 00:09:39.103 "num_blocks": 65536, 00:09:39.103 "uuid": "07952b50-4ff3-440a-b69e-349c3bf79004", 00:09:39.103 "assigned_rate_limits": { 00:09:39.103 "rw_ios_per_sec": 0, 00:09:39.103 "rw_mbytes_per_sec": 0, 00:09:39.103 "r_mbytes_per_sec": 0, 00:09:39.103 "w_mbytes_per_sec": 0 00:09:39.103 }, 00:09:39.103 "claimed": true, 00:09:39.103 "claim_type": "exclusive_write", 00:09:39.103 "zoned": false, 00:09:39.103 "supported_io_types": { 00:09:39.103 "read": true, 00:09:39.103 "write": true, 00:09:39.103 "unmap": true, 00:09:39.103 "flush": true, 00:09:39.103 "reset": true, 00:09:39.103 "nvme_admin": false, 00:09:39.103 "nvme_io": false, 00:09:39.103 "nvme_io_md": false, 00:09:39.103 "write_zeroes": true, 00:09:39.103 "zcopy": true, 00:09:39.103 "get_zone_info": false, 00:09:39.103 "zone_management": false, 00:09:39.103 "zone_append": false, 00:09:39.103 "compare": false, 00:09:39.103 "compare_and_write": false, 00:09:39.103 "abort": true, 00:09:39.103 "seek_hole": false, 00:09:39.103 "seek_data": false, 
00:09:39.103 "copy": true, 00:09:39.103 "nvme_iov_md": false 00:09:39.103 }, 00:09:39.103 "memory_domains": [ 00:09:39.103 { 00:09:39.103 "dma_device_id": "system", 00:09:39.103 "dma_device_type": 1 00:09:39.103 }, 00:09:39.103 { 00:09:39.103 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.103 "dma_device_type": 2 00:09:39.103 } 00:09:39.103 ], 00:09:39.103 "driver_specific": {} 00:09:39.103 } 00:09:39.103 ] 00:09:39.103 09:54:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.103 09:54:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:39.103 09:54:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:39.103 09:54:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:39.103 09:54:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:39.103 09:54:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:39.103 09:54:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:39.103 09:54:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:39.103 09:54:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:39.103 09:54:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:39.103 09:54:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.103 09:54:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.103 09:54:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.103 09:54:15 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.103 09:54:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:39.103 09:54:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.103 09:54:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.103 09:54:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.103 09:54:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.103 09:54:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.103 "name": "Existed_Raid", 00:09:39.103 "uuid": "83940bb5-c2a0-4feb-afaf-a456ab90d319", 00:09:39.103 "strip_size_kb": 64, 00:09:39.103 "state": "online", 00:09:39.103 "raid_level": "concat", 00:09:39.103 "superblock": true, 00:09:39.103 "num_base_bdevs": 3, 00:09:39.103 "num_base_bdevs_discovered": 3, 00:09:39.103 "num_base_bdevs_operational": 3, 00:09:39.103 "base_bdevs_list": [ 00:09:39.103 { 00:09:39.103 "name": "BaseBdev1", 00:09:39.103 "uuid": "bc060c52-6aea-44a5-a956-a2d76691f617", 00:09:39.103 "is_configured": true, 00:09:39.103 "data_offset": 2048, 00:09:39.103 "data_size": 63488 00:09:39.103 }, 00:09:39.103 { 00:09:39.103 "name": "BaseBdev2", 00:09:39.103 "uuid": "f9b5f111-0717-4451-98dd-b0d0bb7e17bf", 00:09:39.103 "is_configured": true, 00:09:39.103 "data_offset": 2048, 00:09:39.103 "data_size": 63488 00:09:39.103 }, 00:09:39.103 { 00:09:39.103 "name": "BaseBdev3", 00:09:39.103 "uuid": "07952b50-4ff3-440a-b69e-349c3bf79004", 00:09:39.103 "is_configured": true, 00:09:39.103 "data_offset": 2048, 00:09:39.103 "data_size": 63488 00:09:39.103 } 00:09:39.103 ] 00:09:39.103 }' 00:09:39.103 09:54:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.103 09:54:15 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.673 09:54:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:39.673 09:54:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:39.673 09:54:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:39.673 09:54:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:39.673 09:54:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:39.673 09:54:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:39.673 09:54:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:39.673 09:54:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:39.673 09:54:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.673 09:54:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.673 [2024-10-21 09:54:16.082237] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:39.673 09:54:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.673 09:54:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:39.673 "name": "Existed_Raid", 00:09:39.673 "aliases": [ 00:09:39.673 "83940bb5-c2a0-4feb-afaf-a456ab90d319" 00:09:39.673 ], 00:09:39.673 "product_name": "Raid Volume", 00:09:39.673 "block_size": 512, 00:09:39.673 "num_blocks": 190464, 00:09:39.673 "uuid": "83940bb5-c2a0-4feb-afaf-a456ab90d319", 00:09:39.673 "assigned_rate_limits": { 00:09:39.673 "rw_ios_per_sec": 0, 00:09:39.673 "rw_mbytes_per_sec": 0, 00:09:39.673 
"r_mbytes_per_sec": 0, 00:09:39.673 "w_mbytes_per_sec": 0 00:09:39.673 }, 00:09:39.673 "claimed": false, 00:09:39.673 "zoned": false, 00:09:39.673 "supported_io_types": { 00:09:39.673 "read": true, 00:09:39.673 "write": true, 00:09:39.673 "unmap": true, 00:09:39.673 "flush": true, 00:09:39.673 "reset": true, 00:09:39.673 "nvme_admin": false, 00:09:39.673 "nvme_io": false, 00:09:39.673 "nvme_io_md": false, 00:09:39.673 "write_zeroes": true, 00:09:39.673 "zcopy": false, 00:09:39.673 "get_zone_info": false, 00:09:39.673 "zone_management": false, 00:09:39.673 "zone_append": false, 00:09:39.673 "compare": false, 00:09:39.673 "compare_and_write": false, 00:09:39.673 "abort": false, 00:09:39.673 "seek_hole": false, 00:09:39.673 "seek_data": false, 00:09:39.673 "copy": false, 00:09:39.673 "nvme_iov_md": false 00:09:39.673 }, 00:09:39.673 "memory_domains": [ 00:09:39.673 { 00:09:39.673 "dma_device_id": "system", 00:09:39.673 "dma_device_type": 1 00:09:39.673 }, 00:09:39.673 { 00:09:39.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.673 "dma_device_type": 2 00:09:39.673 }, 00:09:39.673 { 00:09:39.673 "dma_device_id": "system", 00:09:39.673 "dma_device_type": 1 00:09:39.673 }, 00:09:39.673 { 00:09:39.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.673 "dma_device_type": 2 00:09:39.673 }, 00:09:39.673 { 00:09:39.673 "dma_device_id": "system", 00:09:39.673 "dma_device_type": 1 00:09:39.673 }, 00:09:39.673 { 00:09:39.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.673 "dma_device_type": 2 00:09:39.673 } 00:09:39.673 ], 00:09:39.673 "driver_specific": { 00:09:39.673 "raid": { 00:09:39.673 "uuid": "83940bb5-c2a0-4feb-afaf-a456ab90d319", 00:09:39.673 "strip_size_kb": 64, 00:09:39.673 "state": "online", 00:09:39.673 "raid_level": "concat", 00:09:39.673 "superblock": true, 00:09:39.673 "num_base_bdevs": 3, 00:09:39.673 "num_base_bdevs_discovered": 3, 00:09:39.673 "num_base_bdevs_operational": 3, 00:09:39.673 "base_bdevs_list": [ 00:09:39.673 { 00:09:39.673 
"name": "BaseBdev1", 00:09:39.673 "uuid": "bc060c52-6aea-44a5-a956-a2d76691f617", 00:09:39.673 "is_configured": true, 00:09:39.673 "data_offset": 2048, 00:09:39.673 "data_size": 63488 00:09:39.673 }, 00:09:39.673 { 00:09:39.673 "name": "BaseBdev2", 00:09:39.673 "uuid": "f9b5f111-0717-4451-98dd-b0d0bb7e17bf", 00:09:39.673 "is_configured": true, 00:09:39.673 "data_offset": 2048, 00:09:39.673 "data_size": 63488 00:09:39.673 }, 00:09:39.673 { 00:09:39.673 "name": "BaseBdev3", 00:09:39.673 "uuid": "07952b50-4ff3-440a-b69e-349c3bf79004", 00:09:39.673 "is_configured": true, 00:09:39.673 "data_offset": 2048, 00:09:39.673 "data_size": 63488 00:09:39.673 } 00:09:39.673 ] 00:09:39.673 } 00:09:39.673 } 00:09:39.673 }' 00:09:39.674 09:54:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:39.674 09:54:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:39.674 BaseBdev2 00:09:39.674 BaseBdev3' 00:09:39.674 09:54:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:39.674 09:54:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:39.674 09:54:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:39.674 09:54:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:39.674 09:54:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:39.674 09:54:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.674 09:54:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.674 09:54:16 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.674 09:54:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:39.674 09:54:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:39.674 09:54:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:39.674 09:54:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:39.674 09:54:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:39.674 09:54:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.674 09:54:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.934 09:54:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.934 09:54:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:39.934 09:54:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:39.934 09:54:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:39.934 09:54:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:39.934 09:54:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:39.934 09:54:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.934 09:54:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.934 09:54:16 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.934 09:54:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:39.934 09:54:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:39.934 09:54:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:39.934 09:54:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.934 09:54:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.934 [2024-10-21 09:54:16.353308] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:39.934 [2024-10-21 09:54:16.353356] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:39.934 [2024-10-21 09:54:16.353422] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:39.934 09:54:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.934 09:54:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:39.934 09:54:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:39.934 09:54:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:39.934 09:54:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:39.934 09:54:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:39.934 09:54:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:09:39.934 09:54:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:39.934 09:54:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=offline 00:09:39.934 09:54:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:39.934 09:54:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:39.934 09:54:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:39.934 09:54:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.934 09:54:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.934 09:54:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.934 09:54:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.935 09:54:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.935 09:54:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:39.935 09:54:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.935 09:54:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.935 09:54:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.935 09:54:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.935 "name": "Existed_Raid", 00:09:39.935 "uuid": "83940bb5-c2a0-4feb-afaf-a456ab90d319", 00:09:39.935 "strip_size_kb": 64, 00:09:39.935 "state": "offline", 00:09:39.935 "raid_level": "concat", 00:09:39.935 "superblock": true, 00:09:39.935 "num_base_bdevs": 3, 00:09:39.935 "num_base_bdevs_discovered": 2, 00:09:39.935 "num_base_bdevs_operational": 2, 00:09:39.935 "base_bdevs_list": [ 00:09:39.935 { 00:09:39.935 "name": null, 00:09:39.935 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:39.935 "is_configured": false, 00:09:39.935 "data_offset": 0, 00:09:39.935 "data_size": 63488 00:09:39.935 }, 00:09:39.935 { 00:09:39.935 "name": "BaseBdev2", 00:09:39.935 "uuid": "f9b5f111-0717-4451-98dd-b0d0bb7e17bf", 00:09:39.935 "is_configured": true, 00:09:39.935 "data_offset": 2048, 00:09:39.935 "data_size": 63488 00:09:39.935 }, 00:09:39.935 { 00:09:39.935 "name": "BaseBdev3", 00:09:39.935 "uuid": "07952b50-4ff3-440a-b69e-349c3bf79004", 00:09:39.935 "is_configured": true, 00:09:39.935 "data_offset": 2048, 00:09:39.935 "data_size": 63488 00:09:39.935 } 00:09:39.935 ] 00:09:39.935 }' 00:09:39.935 09:54:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.935 09:54:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.505 09:54:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:40.505 09:54:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:40.505 09:54:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.505 09:54:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:40.505 09:54:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.505 09:54:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.505 09:54:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.505 09:54:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:40.505 09:54:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:40.505 09:54:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 
00:09:40.505 09:54:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.505 09:54:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.505 [2024-10-21 09:54:16.962322] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:40.505 09:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.505 09:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:40.505 09:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:40.505 09:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.505 09:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:40.505 09:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.505 09:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.505 09:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.765 09:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:40.765 09:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:40.765 09:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:40.765 09:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.765 09:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.765 [2024-10-21 09:54:17.119674] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:40.765 [2024-10-21 09:54:17.119836] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state offline 00:09:40.765 09:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.765 09:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:40.765 09:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:40.765 09:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:40.765 09:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.765 09:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.765 09:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.765 09:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.765 09:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:40.765 09:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:40.765 09:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:40.765 09:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:40.765 09:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:40.765 09:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:40.765 09:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.765 09:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.765 BaseBdev2 00:09:40.766 09:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.766 
09:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:40.766 09:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:40.766 09:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:40.766 09:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:40.766 09:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:40.766 09:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:40.766 09:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:40.766 09:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.766 09:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.766 09:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.766 09:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:40.766 09:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.766 09:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.766 [ 00:09:40.766 { 00:09:40.766 "name": "BaseBdev2", 00:09:40.766 "aliases": [ 00:09:40.766 "ac9921dd-deab-477c-8ded-4ade3214dda4" 00:09:40.766 ], 00:09:40.766 "product_name": "Malloc disk", 00:09:40.766 "block_size": 512, 00:09:40.766 "num_blocks": 65536, 00:09:40.766 "uuid": "ac9921dd-deab-477c-8ded-4ade3214dda4", 00:09:40.766 "assigned_rate_limits": { 00:09:40.766 "rw_ios_per_sec": 0, 00:09:40.766 "rw_mbytes_per_sec": 0, 00:09:40.766 "r_mbytes_per_sec": 0, 00:09:40.766 "w_mbytes_per_sec": 0 
00:09:40.766 }, 00:09:40.766 "claimed": false, 00:09:40.766 "zoned": false, 00:09:40.766 "supported_io_types": { 00:09:40.766 "read": true, 00:09:40.766 "write": true, 00:09:40.766 "unmap": true, 00:09:40.766 "flush": true, 00:09:40.766 "reset": true, 00:09:40.766 "nvme_admin": false, 00:09:40.766 "nvme_io": false, 00:09:40.766 "nvme_io_md": false, 00:09:40.766 "write_zeroes": true, 00:09:40.766 "zcopy": true, 00:09:40.766 "get_zone_info": false, 00:09:40.766 "zone_management": false, 00:09:40.766 "zone_append": false, 00:09:40.766 "compare": false, 00:09:40.766 "compare_and_write": false, 00:09:40.766 "abort": true, 00:09:40.766 "seek_hole": false, 00:09:40.766 "seek_data": false, 00:09:40.766 "copy": true, 00:09:40.766 "nvme_iov_md": false 00:09:40.766 }, 00:09:40.766 "memory_domains": [ 00:09:40.766 { 00:09:41.044 "dma_device_id": "system", 00:09:41.044 "dma_device_type": 1 00:09:41.044 }, 00:09:41.044 { 00:09:41.044 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.044 "dma_device_type": 2 00:09:41.044 } 00:09:41.044 ], 00:09:41.044 "driver_specific": {} 00:09:41.044 } 00:09:41.044 ] 00:09:41.044 09:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.044 09:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:41.044 09:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:41.044 09:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:41.044 09:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:41.044 09:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.044 09:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.044 BaseBdev3 00:09:41.044 09:54:17 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.044 09:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:41.044 09:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:41.044 09:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:41.044 09:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:41.044 09:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:41.044 09:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:41.044 09:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:41.044 09:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.044 09:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.044 09:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.044 09:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:41.044 09:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.044 09:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.044 [ 00:09:41.044 { 00:09:41.044 "name": "BaseBdev3", 00:09:41.044 "aliases": [ 00:09:41.044 "5cebb6d5-45f0-451c-8338-0a16624a8b62" 00:09:41.044 ], 00:09:41.044 "product_name": "Malloc disk", 00:09:41.044 "block_size": 512, 00:09:41.044 "num_blocks": 65536, 00:09:41.044 "uuid": "5cebb6d5-45f0-451c-8338-0a16624a8b62", 00:09:41.044 "assigned_rate_limits": { 00:09:41.044 "rw_ios_per_sec": 0, 00:09:41.044 "rw_mbytes_per_sec": 0, 
00:09:41.044 "r_mbytes_per_sec": 0, 00:09:41.044 "w_mbytes_per_sec": 0 00:09:41.044 }, 00:09:41.044 "claimed": false, 00:09:41.044 "zoned": false, 00:09:41.044 "supported_io_types": { 00:09:41.044 "read": true, 00:09:41.044 "write": true, 00:09:41.044 "unmap": true, 00:09:41.044 "flush": true, 00:09:41.044 "reset": true, 00:09:41.044 "nvme_admin": false, 00:09:41.044 "nvme_io": false, 00:09:41.044 "nvme_io_md": false, 00:09:41.044 "write_zeroes": true, 00:09:41.044 "zcopy": true, 00:09:41.044 "get_zone_info": false, 00:09:41.044 "zone_management": false, 00:09:41.044 "zone_append": false, 00:09:41.044 "compare": false, 00:09:41.044 "compare_and_write": false, 00:09:41.044 "abort": true, 00:09:41.044 "seek_hole": false, 00:09:41.044 "seek_data": false, 00:09:41.044 "copy": true, 00:09:41.044 "nvme_iov_md": false 00:09:41.044 }, 00:09:41.044 "memory_domains": [ 00:09:41.044 { 00:09:41.044 "dma_device_id": "system", 00:09:41.044 "dma_device_type": 1 00:09:41.044 }, 00:09:41.044 { 00:09:41.044 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.044 "dma_device_type": 2 00:09:41.044 } 00:09:41.044 ], 00:09:41.044 "driver_specific": {} 00:09:41.044 } 00:09:41.044 ] 00:09:41.044 09:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.044 09:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:41.044 09:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:41.044 09:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:41.044 09:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:41.044 09:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.044 09:54:17 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:41.044 [2024-10-21 09:54:17.457309] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:41.044 [2024-10-21 09:54:17.457449] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:41.044 [2024-10-21 09:54:17.457494] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:41.044 [2024-10-21 09:54:17.459654] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:41.044 09:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.044 09:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:41.044 09:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:41.044 09:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:41.044 09:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:41.044 09:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:41.044 09:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:41.044 09:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:41.044 09:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:41.044 09:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:41.044 09:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:41.044 09:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.044 09:54:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:41.044 09:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.044 09:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.044 09:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.044 09:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:41.044 "name": "Existed_Raid", 00:09:41.045 "uuid": "13eb47ad-29cd-4aa8-bb87-7f561ab1fcbb", 00:09:41.045 "strip_size_kb": 64, 00:09:41.045 "state": "configuring", 00:09:41.045 "raid_level": "concat", 00:09:41.045 "superblock": true, 00:09:41.045 "num_base_bdevs": 3, 00:09:41.045 "num_base_bdevs_discovered": 2, 00:09:41.045 "num_base_bdevs_operational": 3, 00:09:41.045 "base_bdevs_list": [ 00:09:41.045 { 00:09:41.045 "name": "BaseBdev1", 00:09:41.045 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:41.045 "is_configured": false, 00:09:41.045 "data_offset": 0, 00:09:41.045 "data_size": 0 00:09:41.045 }, 00:09:41.045 { 00:09:41.045 "name": "BaseBdev2", 00:09:41.045 "uuid": "ac9921dd-deab-477c-8ded-4ade3214dda4", 00:09:41.045 "is_configured": true, 00:09:41.045 "data_offset": 2048, 00:09:41.045 "data_size": 63488 00:09:41.045 }, 00:09:41.045 { 00:09:41.045 "name": "BaseBdev3", 00:09:41.045 "uuid": "5cebb6d5-45f0-451c-8338-0a16624a8b62", 00:09:41.045 "is_configured": true, 00:09:41.045 "data_offset": 2048, 00:09:41.045 "data_size": 63488 00:09:41.045 } 00:09:41.045 ] 00:09:41.045 }' 00:09:41.045 09:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:41.045 09:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.308 09:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 
00:09:41.308 09:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.308 09:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.568 [2024-10-21 09:54:17.904520] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:41.568 09:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.568 09:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:41.568 09:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:41.568 09:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:41.568 09:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:41.568 09:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:41.568 09:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:41.568 09:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:41.568 09:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:41.568 09:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:41.568 09:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:41.568 09:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.568 09:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:41.568 09:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:09:41.568 09:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.568 09:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.568 09:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:41.568 "name": "Existed_Raid", 00:09:41.568 "uuid": "13eb47ad-29cd-4aa8-bb87-7f561ab1fcbb", 00:09:41.568 "strip_size_kb": 64, 00:09:41.568 "state": "configuring", 00:09:41.568 "raid_level": "concat", 00:09:41.568 "superblock": true, 00:09:41.568 "num_base_bdevs": 3, 00:09:41.568 "num_base_bdevs_discovered": 1, 00:09:41.568 "num_base_bdevs_operational": 3, 00:09:41.568 "base_bdevs_list": [ 00:09:41.568 { 00:09:41.568 "name": "BaseBdev1", 00:09:41.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:41.568 "is_configured": false, 00:09:41.568 "data_offset": 0, 00:09:41.568 "data_size": 0 00:09:41.568 }, 00:09:41.568 { 00:09:41.568 "name": null, 00:09:41.568 "uuid": "ac9921dd-deab-477c-8ded-4ade3214dda4", 00:09:41.568 "is_configured": false, 00:09:41.568 "data_offset": 0, 00:09:41.568 "data_size": 63488 00:09:41.568 }, 00:09:41.568 { 00:09:41.568 "name": "BaseBdev3", 00:09:41.568 "uuid": "5cebb6d5-45f0-451c-8338-0a16624a8b62", 00:09:41.568 "is_configured": true, 00:09:41.568 "data_offset": 2048, 00:09:41.568 "data_size": 63488 00:09:41.568 } 00:09:41.568 ] 00:09:41.568 }' 00:09:41.568 09:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:41.568 09:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.827 09:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:41.827 09:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.827 09:54:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:09:41.827 09:54:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.827 09:54:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.827 09:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:41.827 09:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:41.827 09:54:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.828 09:54:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.087 [2024-10-21 09:54:18.433817] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:42.087 BaseBdev1 00:09:42.087 09:54:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.087 09:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:42.087 09:54:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:42.087 09:54:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:42.087 09:54:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:42.087 09:54:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:42.087 09:54:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:42.087 09:54:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:42.087 09:54:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.087 09:54:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.087 09:54:18 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.087 09:54:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:42.087 09:54:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.087 09:54:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.087 [ 00:09:42.087 { 00:09:42.087 "name": "BaseBdev1", 00:09:42.087 "aliases": [ 00:09:42.087 "9c4ded32-d7bc-4a5e-a439-23bd933b59ec" 00:09:42.087 ], 00:09:42.087 "product_name": "Malloc disk", 00:09:42.087 "block_size": 512, 00:09:42.087 "num_blocks": 65536, 00:09:42.087 "uuid": "9c4ded32-d7bc-4a5e-a439-23bd933b59ec", 00:09:42.087 "assigned_rate_limits": { 00:09:42.087 "rw_ios_per_sec": 0, 00:09:42.087 "rw_mbytes_per_sec": 0, 00:09:42.087 "r_mbytes_per_sec": 0, 00:09:42.087 "w_mbytes_per_sec": 0 00:09:42.087 }, 00:09:42.087 "claimed": true, 00:09:42.087 "claim_type": "exclusive_write", 00:09:42.087 "zoned": false, 00:09:42.087 "supported_io_types": { 00:09:42.087 "read": true, 00:09:42.087 "write": true, 00:09:42.087 "unmap": true, 00:09:42.087 "flush": true, 00:09:42.087 "reset": true, 00:09:42.087 "nvme_admin": false, 00:09:42.087 "nvme_io": false, 00:09:42.087 "nvme_io_md": false, 00:09:42.087 "write_zeroes": true, 00:09:42.087 "zcopy": true, 00:09:42.087 "get_zone_info": false, 00:09:42.087 "zone_management": false, 00:09:42.087 "zone_append": false, 00:09:42.087 "compare": false, 00:09:42.087 "compare_and_write": false, 00:09:42.087 "abort": true, 00:09:42.087 "seek_hole": false, 00:09:42.087 "seek_data": false, 00:09:42.087 "copy": true, 00:09:42.087 "nvme_iov_md": false 00:09:42.087 }, 00:09:42.087 "memory_domains": [ 00:09:42.087 { 00:09:42.087 "dma_device_id": "system", 00:09:42.087 "dma_device_type": 1 00:09:42.087 }, 00:09:42.087 { 00:09:42.087 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:42.087 
"dma_device_type": 2 00:09:42.087 } 00:09:42.087 ], 00:09:42.087 "driver_specific": {} 00:09:42.087 } 00:09:42.087 ] 00:09:42.087 09:54:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.087 09:54:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:42.087 09:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:42.087 09:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:42.087 09:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:42.087 09:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:42.087 09:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:42.087 09:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:42.087 09:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:42.087 09:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:42.087 09:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:42.087 09:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:42.087 09:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.087 09:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:42.087 09:54:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.087 09:54:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:09:42.087 09:54:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.087 09:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:42.087 "name": "Existed_Raid", 00:09:42.087 "uuid": "13eb47ad-29cd-4aa8-bb87-7f561ab1fcbb", 00:09:42.087 "strip_size_kb": 64, 00:09:42.087 "state": "configuring", 00:09:42.087 "raid_level": "concat", 00:09:42.087 "superblock": true, 00:09:42.087 "num_base_bdevs": 3, 00:09:42.087 "num_base_bdevs_discovered": 2, 00:09:42.087 "num_base_bdevs_operational": 3, 00:09:42.087 "base_bdevs_list": [ 00:09:42.087 { 00:09:42.087 "name": "BaseBdev1", 00:09:42.087 "uuid": "9c4ded32-d7bc-4a5e-a439-23bd933b59ec", 00:09:42.087 "is_configured": true, 00:09:42.087 "data_offset": 2048, 00:09:42.087 "data_size": 63488 00:09:42.087 }, 00:09:42.087 { 00:09:42.087 "name": null, 00:09:42.087 "uuid": "ac9921dd-deab-477c-8ded-4ade3214dda4", 00:09:42.087 "is_configured": false, 00:09:42.087 "data_offset": 0, 00:09:42.087 "data_size": 63488 00:09:42.087 }, 00:09:42.087 { 00:09:42.087 "name": "BaseBdev3", 00:09:42.087 "uuid": "5cebb6d5-45f0-451c-8338-0a16624a8b62", 00:09:42.087 "is_configured": true, 00:09:42.087 "data_offset": 2048, 00:09:42.087 "data_size": 63488 00:09:42.087 } 00:09:42.087 ] 00:09:42.087 }' 00:09:42.087 09:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:42.087 09:54:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.345 09:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.345 09:54:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.345 09:54:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.345 09:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq 
'.[0].base_bdevs_list[0].is_configured' 00:09:42.345 09:54:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.345 09:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:42.345 09:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:42.345 09:54:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.345 09:54:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.345 [2024-10-21 09:54:18.921040] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:42.345 09:54:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.345 09:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:42.345 09:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:42.345 09:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:42.346 09:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:42.346 09:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:42.346 09:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:42.346 09:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:42.346 09:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:42.346 09:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:42.346 09:54:18 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:42.346 09:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.346 09:54:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.346 09:54:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.346 09:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:42.605 09:54:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.605 09:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:42.605 "name": "Existed_Raid", 00:09:42.605 "uuid": "13eb47ad-29cd-4aa8-bb87-7f561ab1fcbb", 00:09:42.605 "strip_size_kb": 64, 00:09:42.605 "state": "configuring", 00:09:42.605 "raid_level": "concat", 00:09:42.605 "superblock": true, 00:09:42.605 "num_base_bdevs": 3, 00:09:42.605 "num_base_bdevs_discovered": 1, 00:09:42.605 "num_base_bdevs_operational": 3, 00:09:42.605 "base_bdevs_list": [ 00:09:42.605 { 00:09:42.605 "name": "BaseBdev1", 00:09:42.605 "uuid": "9c4ded32-d7bc-4a5e-a439-23bd933b59ec", 00:09:42.605 "is_configured": true, 00:09:42.605 "data_offset": 2048, 00:09:42.605 "data_size": 63488 00:09:42.605 }, 00:09:42.605 { 00:09:42.605 "name": null, 00:09:42.605 "uuid": "ac9921dd-deab-477c-8ded-4ade3214dda4", 00:09:42.605 "is_configured": false, 00:09:42.605 "data_offset": 0, 00:09:42.605 "data_size": 63488 00:09:42.605 }, 00:09:42.605 { 00:09:42.605 "name": null, 00:09:42.605 "uuid": "5cebb6d5-45f0-451c-8338-0a16624a8b62", 00:09:42.605 "is_configured": false, 00:09:42.605 "data_offset": 0, 00:09:42.605 "data_size": 63488 00:09:42.605 } 00:09:42.605 ] 00:09:42.605 }' 00:09:42.605 09:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:42.605 09:54:18 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:42.863 09:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.863 09:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.863 09:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.863 09:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:42.863 09:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.863 09:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:42.863 09:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:42.864 09:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.864 09:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.864 [2024-10-21 09:54:19.384342] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:42.864 09:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.864 09:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:42.864 09:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:42.864 09:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:42.864 09:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:42.864 09:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:42.864 09:54:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:42.864 09:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:42.864 09:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:42.864 09:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:42.864 09:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:42.864 09:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:42.864 09:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.864 09:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.864 09:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.864 09:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.864 09:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:42.864 "name": "Existed_Raid", 00:09:42.864 "uuid": "13eb47ad-29cd-4aa8-bb87-7f561ab1fcbb", 00:09:42.864 "strip_size_kb": 64, 00:09:42.864 "state": "configuring", 00:09:42.864 "raid_level": "concat", 00:09:42.864 "superblock": true, 00:09:42.864 "num_base_bdevs": 3, 00:09:42.864 "num_base_bdevs_discovered": 2, 00:09:42.864 "num_base_bdevs_operational": 3, 00:09:42.864 "base_bdevs_list": [ 00:09:42.864 { 00:09:42.864 "name": "BaseBdev1", 00:09:42.864 "uuid": "9c4ded32-d7bc-4a5e-a439-23bd933b59ec", 00:09:42.864 "is_configured": true, 00:09:42.864 "data_offset": 2048, 00:09:42.864 "data_size": 63488 00:09:42.864 }, 00:09:42.864 { 00:09:42.864 "name": null, 00:09:42.864 "uuid": "ac9921dd-deab-477c-8ded-4ade3214dda4", 00:09:42.864 "is_configured": 
false, 00:09:42.864 "data_offset": 0, 00:09:42.864 "data_size": 63488 00:09:42.864 }, 00:09:42.864 { 00:09:42.864 "name": "BaseBdev3", 00:09:42.864 "uuid": "5cebb6d5-45f0-451c-8338-0a16624a8b62", 00:09:42.864 "is_configured": true, 00:09:42.864 "data_offset": 2048, 00:09:42.864 "data_size": 63488 00:09:42.864 } 00:09:42.864 ] 00:09:42.864 }' 00:09:42.864 09:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:42.864 09:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.432 09:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.432 09:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:43.432 09:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.432 09:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.432 09:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.432 09:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:43.432 09:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:43.432 09:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.432 09:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.432 [2024-10-21 09:54:19.855644] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:43.432 09:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.432 09:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:43.432 09:54:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:43.432 09:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:43.432 09:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:43.432 09:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:43.432 09:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:43.432 09:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:43.432 09:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:43.432 09:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:43.432 09:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:43.432 09:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.432 09:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:43.432 09:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.432 09:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.432 09:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.432 09:54:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:43.432 "name": "Existed_Raid", 00:09:43.432 "uuid": "13eb47ad-29cd-4aa8-bb87-7f561ab1fcbb", 00:09:43.432 "strip_size_kb": 64, 00:09:43.432 "state": "configuring", 00:09:43.432 "raid_level": "concat", 00:09:43.432 "superblock": true, 00:09:43.432 "num_base_bdevs": 3, 00:09:43.432 
"num_base_bdevs_discovered": 1, 00:09:43.433 "num_base_bdevs_operational": 3, 00:09:43.433 "base_bdevs_list": [ 00:09:43.433 { 00:09:43.433 "name": null, 00:09:43.433 "uuid": "9c4ded32-d7bc-4a5e-a439-23bd933b59ec", 00:09:43.433 "is_configured": false, 00:09:43.433 "data_offset": 0, 00:09:43.433 "data_size": 63488 00:09:43.433 }, 00:09:43.433 { 00:09:43.433 "name": null, 00:09:43.433 "uuid": "ac9921dd-deab-477c-8ded-4ade3214dda4", 00:09:43.433 "is_configured": false, 00:09:43.433 "data_offset": 0, 00:09:43.433 "data_size": 63488 00:09:43.433 }, 00:09:43.433 { 00:09:43.433 "name": "BaseBdev3", 00:09:43.433 "uuid": "5cebb6d5-45f0-451c-8338-0a16624a8b62", 00:09:43.433 "is_configured": true, 00:09:43.433 "data_offset": 2048, 00:09:43.433 "data_size": 63488 00:09:43.433 } 00:09:43.433 ] 00:09:43.433 }' 00:09:43.433 09:54:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:43.433 09:54:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.002 09:54:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.002 09:54:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:44.002 09:54:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.002 09:54:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.002 09:54:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.002 09:54:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:44.002 09:54:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:44.002 09:54:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.002 09:54:20 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.002 [2024-10-21 09:54:20.406273] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:44.002 09:54:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.002 09:54:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:44.002 09:54:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:44.002 09:54:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:44.002 09:54:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:44.002 09:54:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:44.002 09:54:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:44.002 09:54:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:44.002 09:54:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:44.002 09:54:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:44.002 09:54:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:44.002 09:54:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.002 09:54:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.002 09:54:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:44.002 09:54:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.002 
09:54:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.002 09:54:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:44.002 "name": "Existed_Raid", 00:09:44.002 "uuid": "13eb47ad-29cd-4aa8-bb87-7f561ab1fcbb", 00:09:44.002 "strip_size_kb": 64, 00:09:44.002 "state": "configuring", 00:09:44.002 "raid_level": "concat", 00:09:44.002 "superblock": true, 00:09:44.002 "num_base_bdevs": 3, 00:09:44.002 "num_base_bdevs_discovered": 2, 00:09:44.002 "num_base_bdevs_operational": 3, 00:09:44.002 "base_bdevs_list": [ 00:09:44.002 { 00:09:44.002 "name": null, 00:09:44.002 "uuid": "9c4ded32-d7bc-4a5e-a439-23bd933b59ec", 00:09:44.002 "is_configured": false, 00:09:44.002 "data_offset": 0, 00:09:44.002 "data_size": 63488 00:09:44.002 }, 00:09:44.002 { 00:09:44.002 "name": "BaseBdev2", 00:09:44.002 "uuid": "ac9921dd-deab-477c-8ded-4ade3214dda4", 00:09:44.002 "is_configured": true, 00:09:44.002 "data_offset": 2048, 00:09:44.002 "data_size": 63488 00:09:44.002 }, 00:09:44.002 { 00:09:44.002 "name": "BaseBdev3", 00:09:44.002 "uuid": "5cebb6d5-45f0-451c-8338-0a16624a8b62", 00:09:44.002 "is_configured": true, 00:09:44.002 "data_offset": 2048, 00:09:44.002 "data_size": 63488 00:09:44.002 } 00:09:44.002 ] 00:09:44.002 }' 00:09:44.002 09:54:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:44.002 09:54:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.262 09:54:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.263 09:54:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.263 09:54:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.263 09:54:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 
00:09:44.263 09:54:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.522 09:54:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:44.522 09:54:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.522 09:54:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.522 09:54:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:44.523 09:54:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.523 09:54:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.523 09:54:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 9c4ded32-d7bc-4a5e-a439-23bd933b59ec 00:09:44.523 09:54:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.523 09:54:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.523 [2024-10-21 09:54:20.978680] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:44.523 [2024-10-21 09:54:20.978932] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:09:44.523 [2024-10-21 09:54:20.978950] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:44.523 [2024-10-21 09:54:20.979223] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:09:44.523 [2024-10-21 09:54:20.979386] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:09:44.523 [2024-10-21 09:54:20.979398] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006600 
00:09:44.523 [2024-10-21 09:54:20.979545] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:44.523 NewBaseBdev 00:09:44.523 09:54:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.523 09:54:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:44.523 09:54:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:09:44.523 09:54:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:44.523 09:54:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:44.523 09:54:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:44.523 09:54:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:44.523 09:54:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:44.523 09:54:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.523 09:54:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.523 09:54:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.523 09:54:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:44.523 09:54:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.523 09:54:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.523 [ 00:09:44.523 { 00:09:44.523 "name": "NewBaseBdev", 00:09:44.523 "aliases": [ 00:09:44.523 "9c4ded32-d7bc-4a5e-a439-23bd933b59ec" 00:09:44.523 ], 00:09:44.523 "product_name": "Malloc disk", 00:09:44.523 "block_size": 512, 
00:09:44.523 "num_blocks": 65536, 00:09:44.523 "uuid": "9c4ded32-d7bc-4a5e-a439-23bd933b59ec", 00:09:44.523 "assigned_rate_limits": { 00:09:44.523 "rw_ios_per_sec": 0, 00:09:44.523 "rw_mbytes_per_sec": 0, 00:09:44.523 "r_mbytes_per_sec": 0, 00:09:44.523 "w_mbytes_per_sec": 0 00:09:44.523 }, 00:09:44.523 "claimed": true, 00:09:44.523 "claim_type": "exclusive_write", 00:09:44.523 "zoned": false, 00:09:44.523 "supported_io_types": { 00:09:44.523 "read": true, 00:09:44.523 "write": true, 00:09:44.523 "unmap": true, 00:09:44.523 "flush": true, 00:09:44.523 "reset": true, 00:09:44.523 "nvme_admin": false, 00:09:44.523 "nvme_io": false, 00:09:44.523 "nvme_io_md": false, 00:09:44.523 "write_zeroes": true, 00:09:44.523 "zcopy": true, 00:09:44.523 "get_zone_info": false, 00:09:44.523 "zone_management": false, 00:09:44.523 "zone_append": false, 00:09:44.523 "compare": false, 00:09:44.523 "compare_and_write": false, 00:09:44.523 "abort": true, 00:09:44.523 "seek_hole": false, 00:09:44.523 "seek_data": false, 00:09:44.523 "copy": true, 00:09:44.523 "nvme_iov_md": false 00:09:44.523 }, 00:09:44.523 "memory_domains": [ 00:09:44.523 { 00:09:44.523 "dma_device_id": "system", 00:09:44.523 "dma_device_type": 1 00:09:44.523 }, 00:09:44.523 { 00:09:44.523 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.523 "dma_device_type": 2 00:09:44.523 } 00:09:44.523 ], 00:09:44.523 "driver_specific": {} 00:09:44.523 } 00:09:44.523 ] 00:09:44.523 09:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.523 09:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:44.523 09:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:44.523 09:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:44.523 09:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 
-- # local expected_state=online 00:09:44.523 09:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:44.523 09:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:44.523 09:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:44.523 09:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:44.523 09:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:44.523 09:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:44.523 09:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:44.523 09:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.523 09:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.523 09:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:44.523 09:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.523 09:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.523 09:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:44.523 "name": "Existed_Raid", 00:09:44.523 "uuid": "13eb47ad-29cd-4aa8-bb87-7f561ab1fcbb", 00:09:44.523 "strip_size_kb": 64, 00:09:44.523 "state": "online", 00:09:44.523 "raid_level": "concat", 00:09:44.523 "superblock": true, 00:09:44.523 "num_base_bdevs": 3, 00:09:44.523 "num_base_bdevs_discovered": 3, 00:09:44.523 "num_base_bdevs_operational": 3, 00:09:44.523 "base_bdevs_list": [ 00:09:44.523 { 00:09:44.523 "name": "NewBaseBdev", 00:09:44.523 "uuid": 
"9c4ded32-d7bc-4a5e-a439-23bd933b59ec", 00:09:44.523 "is_configured": true, 00:09:44.523 "data_offset": 2048, 00:09:44.523 "data_size": 63488 00:09:44.523 }, 00:09:44.523 { 00:09:44.523 "name": "BaseBdev2", 00:09:44.523 "uuid": "ac9921dd-deab-477c-8ded-4ade3214dda4", 00:09:44.523 "is_configured": true, 00:09:44.523 "data_offset": 2048, 00:09:44.523 "data_size": 63488 00:09:44.523 }, 00:09:44.523 { 00:09:44.523 "name": "BaseBdev3", 00:09:44.523 "uuid": "5cebb6d5-45f0-451c-8338-0a16624a8b62", 00:09:44.523 "is_configured": true, 00:09:44.523 "data_offset": 2048, 00:09:44.523 "data_size": 63488 00:09:44.523 } 00:09:44.523 ] 00:09:44.523 }' 00:09:44.523 09:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:44.523 09:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.204 09:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:45.204 09:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:45.204 09:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:45.204 09:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:45.204 09:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:45.204 09:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:45.204 09:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:45.204 09:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.204 09:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.204 09:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq 
'.[]' 00:09:45.204 [2024-10-21 09:54:21.494398] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:45.204 09:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.204 09:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:45.204 "name": "Existed_Raid", 00:09:45.204 "aliases": [ 00:09:45.204 "13eb47ad-29cd-4aa8-bb87-7f561ab1fcbb" 00:09:45.204 ], 00:09:45.204 "product_name": "Raid Volume", 00:09:45.204 "block_size": 512, 00:09:45.204 "num_blocks": 190464, 00:09:45.204 "uuid": "13eb47ad-29cd-4aa8-bb87-7f561ab1fcbb", 00:09:45.204 "assigned_rate_limits": { 00:09:45.204 "rw_ios_per_sec": 0, 00:09:45.204 "rw_mbytes_per_sec": 0, 00:09:45.204 "r_mbytes_per_sec": 0, 00:09:45.204 "w_mbytes_per_sec": 0 00:09:45.204 }, 00:09:45.204 "claimed": false, 00:09:45.204 "zoned": false, 00:09:45.204 "supported_io_types": { 00:09:45.204 "read": true, 00:09:45.204 "write": true, 00:09:45.204 "unmap": true, 00:09:45.204 "flush": true, 00:09:45.204 "reset": true, 00:09:45.204 "nvme_admin": false, 00:09:45.204 "nvme_io": false, 00:09:45.204 "nvme_io_md": false, 00:09:45.204 "write_zeroes": true, 00:09:45.204 "zcopy": false, 00:09:45.204 "get_zone_info": false, 00:09:45.204 "zone_management": false, 00:09:45.204 "zone_append": false, 00:09:45.204 "compare": false, 00:09:45.204 "compare_and_write": false, 00:09:45.204 "abort": false, 00:09:45.204 "seek_hole": false, 00:09:45.204 "seek_data": false, 00:09:45.204 "copy": false, 00:09:45.204 "nvme_iov_md": false 00:09:45.204 }, 00:09:45.204 "memory_domains": [ 00:09:45.204 { 00:09:45.204 "dma_device_id": "system", 00:09:45.204 "dma_device_type": 1 00:09:45.204 }, 00:09:45.204 { 00:09:45.204 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:45.204 "dma_device_type": 2 00:09:45.204 }, 00:09:45.204 { 00:09:45.204 "dma_device_id": "system", 00:09:45.204 "dma_device_type": 1 00:09:45.204 }, 00:09:45.204 { 00:09:45.204 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:45.204 "dma_device_type": 2 00:09:45.204 }, 00:09:45.204 { 00:09:45.204 "dma_device_id": "system", 00:09:45.204 "dma_device_type": 1 00:09:45.204 }, 00:09:45.204 { 00:09:45.204 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:45.204 "dma_device_type": 2 00:09:45.204 } 00:09:45.204 ], 00:09:45.204 "driver_specific": { 00:09:45.204 "raid": { 00:09:45.204 "uuid": "13eb47ad-29cd-4aa8-bb87-7f561ab1fcbb", 00:09:45.204 "strip_size_kb": 64, 00:09:45.204 "state": "online", 00:09:45.204 "raid_level": "concat", 00:09:45.204 "superblock": true, 00:09:45.204 "num_base_bdevs": 3, 00:09:45.204 "num_base_bdevs_discovered": 3, 00:09:45.204 "num_base_bdevs_operational": 3, 00:09:45.204 "base_bdevs_list": [ 00:09:45.204 { 00:09:45.204 "name": "NewBaseBdev", 00:09:45.204 "uuid": "9c4ded32-d7bc-4a5e-a439-23bd933b59ec", 00:09:45.204 "is_configured": true, 00:09:45.204 "data_offset": 2048, 00:09:45.204 "data_size": 63488 00:09:45.204 }, 00:09:45.204 { 00:09:45.204 "name": "BaseBdev2", 00:09:45.204 "uuid": "ac9921dd-deab-477c-8ded-4ade3214dda4", 00:09:45.204 "is_configured": true, 00:09:45.204 "data_offset": 2048, 00:09:45.204 "data_size": 63488 00:09:45.204 }, 00:09:45.204 { 00:09:45.204 "name": "BaseBdev3", 00:09:45.204 "uuid": "5cebb6d5-45f0-451c-8338-0a16624a8b62", 00:09:45.204 "is_configured": true, 00:09:45.204 "data_offset": 2048, 00:09:45.204 "data_size": 63488 00:09:45.204 } 00:09:45.204 ] 00:09:45.204 } 00:09:45.204 } 00:09:45.204 }' 00:09:45.204 09:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:45.204 09:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:45.204 BaseBdev2 00:09:45.204 BaseBdev3' 00:09:45.204 09:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:09:45.204 09:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:45.204 09:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:45.204 09:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:45.204 09:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:45.204 09:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.204 09:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.204 09:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.204 09:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:45.204 09:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:45.204 09:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:45.204 09:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:45.204 09:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.204 09:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.204 09:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:45.204 09:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.204 09:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:45.204 09:54:21 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:45.204 09:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:45.205 09:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:45.205 09:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:45.205 09:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.205 09:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.205 09:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.205 09:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:45.205 09:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:45.205 09:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:45.205 09:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.205 09:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.205 [2024-10-21 09:54:21.693668] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:45.205 [2024-10-21 09:54:21.693709] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:45.205 [2024-10-21 09:54:21.693790] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:45.205 [2024-10-21 09:54:21.693851] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:45.205 [2024-10-21 09:54:21.693865] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000006600 name Existed_Raid, state offline 00:09:45.205 09:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.205 09:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 65813 00:09:45.205 09:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 65813 ']' 00:09:45.205 09:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 65813 00:09:45.205 09:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:09:45.205 09:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:45.205 09:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65813 00:09:45.205 09:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:45.205 09:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:45.205 killing process with pid 65813 00:09:45.205 09:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65813' 00:09:45.205 09:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 65813 00:09:45.205 [2024-10-21 09:54:21.738124] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:45.205 09:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 65813 00:09:45.479 [2024-10-21 09:54:22.053479] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:46.879 09:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:46.879 00:09:46.879 real 0m10.476s 00:09:46.879 user 0m16.455s 00:09:46.879 sys 0m1.896s 00:09:46.879 09:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:09:46.879 09:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.879 ************************************ 00:09:46.879 END TEST raid_state_function_test_sb 00:09:46.879 ************************************ 00:09:46.879 09:54:23 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:09:46.879 09:54:23 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:46.879 09:54:23 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:46.879 09:54:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:46.879 ************************************ 00:09:46.879 START TEST raid_superblock_test 00:09:46.879 ************************************ 00:09:46.879 09:54:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 3 00:09:46.879 09:54:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:09:46.879 09:54:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:46.879 09:54:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:46.879 09:54:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:46.879 09:54:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:46.879 09:54:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:46.879 09:54:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:46.879 09:54:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:46.879 09:54:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:46.879 09:54:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:46.879 09:54:23 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:46.879 09:54:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:46.879 09:54:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:46.879 09:54:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:09:46.879 09:54:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:46.879 09:54:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:46.879 09:54:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=66428 00:09:46.879 09:54:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:46.879 09:54:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 66428 00:09:46.879 09:54:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 66428 ']' 00:09:46.879 09:54:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:46.879 09:54:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:46.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:46.879 09:54:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:46.879 09:54:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:46.879 09:54:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.879 [2024-10-21 09:54:23.422462] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
00:09:46.879 [2024-10-21 09:54:23.422613] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66428 ] 00:09:47.139 [2024-10-21 09:54:23.585729] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:47.401 [2024-10-21 09:54:23.736484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:47.401 [2024-10-21 09:54:23.994372] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:47.401 [2024-10-21 09:54:23.994445] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:47.971 09:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:47.971 09:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:09:47.971 09:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:47.971 09:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:47.971 09:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:47.971 09:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:47.971 09:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:47.971 09:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:47.971 09:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:47.971 09:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:47.971 09:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:47.971 
09:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.971 09:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.971 malloc1 00:09:47.971 09:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.971 09:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:47.971 09:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.971 09:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.971 [2024-10-21 09:54:24.325239] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:47.971 [2024-10-21 09:54:24.325322] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:47.971 [2024-10-21 09:54:24.325348] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:09:47.971 [2024-10-21 09:54:24.325358] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:47.971 [2024-10-21 09:54:24.327827] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:47.971 [2024-10-21 09:54:24.327862] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:47.971 pt1 00:09:47.971 09:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.971 09:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:47.971 09:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:47.971 09:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:47.971 09:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:47.971 09:54:24 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:47.971 09:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:47.971 09:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:47.971 09:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:47.971 09:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:47.971 09:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.971 09:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.971 malloc2 00:09:47.971 09:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.972 09:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:47.972 09:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.972 09:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.972 [2024-10-21 09:54:24.389233] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:47.972 [2024-10-21 09:54:24.389295] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:47.972 [2024-10-21 09:54:24.389319] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:09:47.972 [2024-10-21 09:54:24.389329] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:47.972 [2024-10-21 09:54:24.391720] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:47.972 [2024-10-21 09:54:24.391754] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:47.972 
pt2 00:09:47.972 09:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.972 09:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:47.972 09:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:47.972 09:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:47.972 09:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:47.972 09:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:47.972 09:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:47.972 09:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:47.972 09:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:47.972 09:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:47.972 09:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.972 09:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.972 malloc3 00:09:47.972 09:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.972 09:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:47.972 09:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.972 09:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.972 [2024-10-21 09:54:24.461327] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:47.972 [2024-10-21 09:54:24.461381] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:47.972 [2024-10-21 09:54:24.461404] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:09:47.972 [2024-10-21 09:54:24.461414] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:47.972 [2024-10-21 09:54:24.463781] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:47.972 [2024-10-21 09:54:24.463816] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:47.972 pt3 00:09:47.972 09:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.972 09:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:47.972 09:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:47.972 09:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:47.972 09:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.972 09:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.972 [2024-10-21 09:54:24.473361] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:47.972 [2024-10-21 09:54:24.475442] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:47.972 [2024-10-21 09:54:24.475512] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:47.972 [2024-10-21 09:54:24.475686] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000005b80 00:09:47.972 [2024-10-21 09:54:24.475702] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:47.972 [2024-10-21 09:54:24.475948] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 
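The `blockcnt 190464, blocklen 512` reported above for the concat raid bdev can be cross-checked against the earlier commands: each base bdev is created with `bdev_malloc_create 32 512` (32 MiB of 512-byte blocks, i.e. 65536 blocks), the `data_offset` of 2048 blocks shown in the bdev info is reserved per base bdev for the superblock (`-s`), and concat sums the remaining capacity of the three bases. A minimal bash sketch of that arithmetic (the 32 MiB size, 2048-block offset, and count of 3 base bdevs are all taken from the log; nothing here calls SPDK itself):

```shell
# Cross-check "blockcnt 190464" for the 3-member concat raid bdev.
num_base_bdevs=3
malloc_size_mb=32   # from: rpc_cmd bdev_malloc_create 32 512
block_size=512
data_offset=2048    # per-base-bdev blocks reserved for the raid superblock

blocks_per_base=$(( malloc_size_mb * 1024 * 1024 / block_size ))  # 65536
data_size=$(( blocks_per_base - data_offset ))                    # 63488, matches "data_size" in the dump
total_blocks=$(( num_base_bdevs * data_size ))                    # 190464, matches "num_blocks"

echo "$total_blocks"
```

This also explains the `data_size: 63488` field repeated for pt1/pt2/pt3 in the `verify_raid_bdev_state` output below: it is simply the 65536-block base capacity minus the 2048-block superblock offset.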
00:09:47.972 [2024-10-21 09:54:24.476124] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000005b80 00:09:47.972 [2024-10-21 09:54:24.476148] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000005b80 00:09:47.972 [2024-10-21 09:54:24.476296] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:47.972 09:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.972 09:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:47.972 09:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:47.972 09:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:47.972 09:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:47.972 09:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:47.972 09:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:47.972 09:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:47.972 09:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:47.972 09:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:47.972 09:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:47.972 09:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:47.972 09:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.972 09:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.972 09:54:24 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.972 09:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.972 09:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:47.972 "name": "raid_bdev1", 00:09:47.972 "uuid": "4582a805-bc7a-4f57-9c49-953fff4766d7", 00:09:47.972 "strip_size_kb": 64, 00:09:47.972 "state": "online", 00:09:47.972 "raid_level": "concat", 00:09:47.972 "superblock": true, 00:09:47.972 "num_base_bdevs": 3, 00:09:47.972 "num_base_bdevs_discovered": 3, 00:09:47.972 "num_base_bdevs_operational": 3, 00:09:47.972 "base_bdevs_list": [ 00:09:47.972 { 00:09:47.972 "name": "pt1", 00:09:47.972 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:47.972 "is_configured": true, 00:09:47.972 "data_offset": 2048, 00:09:47.972 "data_size": 63488 00:09:47.972 }, 00:09:47.972 { 00:09:47.972 "name": "pt2", 00:09:47.972 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:47.972 "is_configured": true, 00:09:47.972 "data_offset": 2048, 00:09:47.972 "data_size": 63488 00:09:47.972 }, 00:09:47.972 { 00:09:47.972 "name": "pt3", 00:09:47.972 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:47.972 "is_configured": true, 00:09:47.972 "data_offset": 2048, 00:09:47.972 "data_size": 63488 00:09:47.972 } 00:09:47.972 ] 00:09:47.972 }' 00:09:47.972 09:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:47.972 09:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.543 09:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:48.543 09:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:48.543 09:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:48.543 09:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:09:48.543 09:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:48.543 09:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:48.543 09:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:48.543 09:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:48.543 09:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.543 09:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.543 [2024-10-21 09:54:24.885045] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:48.543 09:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.543 09:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:48.543 "name": "raid_bdev1", 00:09:48.543 "aliases": [ 00:09:48.543 "4582a805-bc7a-4f57-9c49-953fff4766d7" 00:09:48.543 ], 00:09:48.543 "product_name": "Raid Volume", 00:09:48.543 "block_size": 512, 00:09:48.543 "num_blocks": 190464, 00:09:48.543 "uuid": "4582a805-bc7a-4f57-9c49-953fff4766d7", 00:09:48.543 "assigned_rate_limits": { 00:09:48.543 "rw_ios_per_sec": 0, 00:09:48.543 "rw_mbytes_per_sec": 0, 00:09:48.543 "r_mbytes_per_sec": 0, 00:09:48.543 "w_mbytes_per_sec": 0 00:09:48.543 }, 00:09:48.543 "claimed": false, 00:09:48.543 "zoned": false, 00:09:48.543 "supported_io_types": { 00:09:48.543 "read": true, 00:09:48.543 "write": true, 00:09:48.543 "unmap": true, 00:09:48.543 "flush": true, 00:09:48.543 "reset": true, 00:09:48.543 "nvme_admin": false, 00:09:48.543 "nvme_io": false, 00:09:48.543 "nvme_io_md": false, 00:09:48.543 "write_zeroes": true, 00:09:48.543 "zcopy": false, 00:09:48.543 "get_zone_info": false, 00:09:48.543 "zone_management": false, 00:09:48.543 "zone_append": false, 00:09:48.543 "compare": 
false, 00:09:48.543 "compare_and_write": false, 00:09:48.543 "abort": false, 00:09:48.544 "seek_hole": false, 00:09:48.544 "seek_data": false, 00:09:48.544 "copy": false, 00:09:48.544 "nvme_iov_md": false 00:09:48.544 }, 00:09:48.544 "memory_domains": [ 00:09:48.544 { 00:09:48.544 "dma_device_id": "system", 00:09:48.544 "dma_device_type": 1 00:09:48.544 }, 00:09:48.544 { 00:09:48.544 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:48.544 "dma_device_type": 2 00:09:48.544 }, 00:09:48.544 { 00:09:48.544 "dma_device_id": "system", 00:09:48.544 "dma_device_type": 1 00:09:48.544 }, 00:09:48.544 { 00:09:48.544 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:48.544 "dma_device_type": 2 00:09:48.544 }, 00:09:48.544 { 00:09:48.544 "dma_device_id": "system", 00:09:48.544 "dma_device_type": 1 00:09:48.544 }, 00:09:48.544 { 00:09:48.544 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:48.544 "dma_device_type": 2 00:09:48.544 } 00:09:48.544 ], 00:09:48.544 "driver_specific": { 00:09:48.544 "raid": { 00:09:48.544 "uuid": "4582a805-bc7a-4f57-9c49-953fff4766d7", 00:09:48.544 "strip_size_kb": 64, 00:09:48.544 "state": "online", 00:09:48.544 "raid_level": "concat", 00:09:48.544 "superblock": true, 00:09:48.544 "num_base_bdevs": 3, 00:09:48.544 "num_base_bdevs_discovered": 3, 00:09:48.544 "num_base_bdevs_operational": 3, 00:09:48.544 "base_bdevs_list": [ 00:09:48.544 { 00:09:48.544 "name": "pt1", 00:09:48.544 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:48.544 "is_configured": true, 00:09:48.544 "data_offset": 2048, 00:09:48.544 "data_size": 63488 00:09:48.544 }, 00:09:48.544 { 00:09:48.544 "name": "pt2", 00:09:48.544 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:48.544 "is_configured": true, 00:09:48.544 "data_offset": 2048, 00:09:48.544 "data_size": 63488 00:09:48.544 }, 00:09:48.544 { 00:09:48.544 "name": "pt3", 00:09:48.544 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:48.544 "is_configured": true, 00:09:48.544 "data_offset": 2048, 00:09:48.544 
"data_size": 63488 00:09:48.544 } 00:09:48.544 ] 00:09:48.544 } 00:09:48.544 } 00:09:48.544 }' 00:09:48.544 09:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:48.544 09:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:48.544 pt2 00:09:48.544 pt3' 00:09:48.544 09:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:48.544 09:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:48.544 09:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:48.544 09:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:48.544 09:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:48.544 09:54:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.544 09:54:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.544 09:54:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.544 09:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:48.544 09:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:48.544 09:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:48.544 09:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:48.544 09:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:48.544 09:54:25 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.544 09:54:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.544 09:54:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.544 09:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:48.544 09:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:48.544 09:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:48.544 09:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:48.544 09:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:48.544 09:54:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.544 09:54:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.544 09:54:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.544 09:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:48.544 09:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:48.544 09:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:48.544 09:54:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.544 09:54:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.544 09:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:48.544 [2024-10-21 09:54:25.120442] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:48.544 09:54:25 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.804 09:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=4582a805-bc7a-4f57-9c49-953fff4766d7 00:09:48.804 09:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 4582a805-bc7a-4f57-9c49-953fff4766d7 ']' 00:09:48.804 09:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:48.804 09:54:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.804 09:54:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.804 [2024-10-21 09:54:25.156136] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:48.804 [2024-10-21 09:54:25.156174] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:48.804 [2024-10-21 09:54:25.156261] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:48.805 [2024-10-21 09:54:25.156330] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:48.805 [2024-10-21 09:54:25.156342] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000005b80 name raid_bdev1, state offline 00:09:48.805 09:54:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.805 09:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.805 09:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:48.805 09:54:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.805 09:54:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.805 09:54:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.805 09:54:25 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:48.805 09:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:48.805 09:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:48.805 09:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:48.805 09:54:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.805 09:54:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.805 09:54:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.805 09:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:48.805 09:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:48.805 09:54:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.805 09:54:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.805 09:54:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.805 09:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:48.805 09:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:48.805 09:54:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.805 09:54:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.805 09:54:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.805 09:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:48.805 09:54:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:09:48.805 09:54:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.805 09:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:48.805 09:54:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.805 09:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:48.805 09:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:48.805 09:54:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:09:48.805 09:54:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:48.805 09:54:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:09:48.805 09:54:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:48.805 09:54:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:09:48.805 09:54:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:48.805 09:54:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:48.805 09:54:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.805 09:54:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.805 [2024-10-21 09:54:25.303946] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:48.805 [2024-10-21 09:54:25.306165] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
malloc2 is claimed 00:09:48.805 [2024-10-21 09:54:25.306223] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:48.805 [2024-10-21 09:54:25.306276] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:48.805 [2024-10-21 09:54:25.306327] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:48.805 [2024-10-21 09:54:25.306346] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:48.805 [2024-10-21 09:54:25.306364] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:48.805 [2024-10-21 09:54:25.306374] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000005f00 name raid_bdev1, state configuring 00:09:48.805 request: 00:09:48.805 { 00:09:48.805 "name": "raid_bdev1", 00:09:48.805 "raid_level": "concat", 00:09:48.805 "base_bdevs": [ 00:09:48.805 "malloc1", 00:09:48.805 "malloc2", 00:09:48.805 "malloc3" 00:09:48.805 ], 00:09:48.805 "strip_size_kb": 64, 00:09:48.805 "superblock": false, 00:09:48.805 "method": "bdev_raid_create", 00:09:48.805 "req_id": 1 00:09:48.805 } 00:09:48.805 Got JSON-RPC error response 00:09:48.805 response: 00:09:48.805 { 00:09:48.805 "code": -17, 00:09:48.805 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:48.805 } 00:09:48.805 09:54:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:48.805 09:54:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:09:48.805 09:54:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:48.805 09:54:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:48.805 09:54:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es 
== 0 )) 00:09:48.805 09:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.805 09:54:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.805 09:54:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.805 09:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:48.805 09:54:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.805 09:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:48.805 09:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:48.805 09:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:48.805 09:54:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.805 09:54:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.805 [2024-10-21 09:54:25.371779] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:48.805 [2024-10-21 09:54:25.371838] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:48.805 [2024-10-21 09:54:25.371859] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:48.805 [2024-10-21 09:54:25.371869] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:48.805 [2024-10-21 09:54:25.374351] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:48.805 [2024-10-21 09:54:25.374387] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:48.805 [2024-10-21 09:54:25.374473] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:48.805 [2024-10-21 09:54:25.374537] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:48.805 pt1 00:09:48.805 09:54:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.805 09:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:09:48.805 09:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:48.805 09:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:48.805 09:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:48.805 09:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:48.805 09:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:48.805 09:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:48.805 09:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:48.805 09:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:48.805 09:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:48.805 09:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.805 09:54:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.805 09:54:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.805 09:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:48.805 09:54:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.065 09:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:49.065 "name": "raid_bdev1", 
00:09:49.065 "uuid": "4582a805-bc7a-4f57-9c49-953fff4766d7", 00:09:49.065 "strip_size_kb": 64, 00:09:49.065 "state": "configuring", 00:09:49.065 "raid_level": "concat", 00:09:49.065 "superblock": true, 00:09:49.065 "num_base_bdevs": 3, 00:09:49.065 "num_base_bdevs_discovered": 1, 00:09:49.065 "num_base_bdevs_operational": 3, 00:09:49.065 "base_bdevs_list": [ 00:09:49.065 { 00:09:49.065 "name": "pt1", 00:09:49.065 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:49.065 "is_configured": true, 00:09:49.065 "data_offset": 2048, 00:09:49.065 "data_size": 63488 00:09:49.065 }, 00:09:49.065 { 00:09:49.065 "name": null, 00:09:49.065 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:49.065 "is_configured": false, 00:09:49.065 "data_offset": 2048, 00:09:49.065 "data_size": 63488 00:09:49.065 }, 00:09:49.065 { 00:09:49.065 "name": null, 00:09:49.065 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:49.065 "is_configured": false, 00:09:49.065 "data_offset": 2048, 00:09:49.065 "data_size": 63488 00:09:49.065 } 00:09:49.065 ] 00:09:49.065 }' 00:09:49.065 09:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:49.065 09:54:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.325 09:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:49.325 09:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:49.325 09:54:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.325 09:54:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.325 [2024-10-21 09:54:25.783286] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:49.325 [2024-10-21 09:54:25.783379] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:49.325 [2024-10-21 09:54:25.783411] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:09:49.325 [2024-10-21 09:54:25.783425] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:49.325 [2024-10-21 09:54:25.783988] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:49.325 [2024-10-21 09:54:25.784016] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:49.325 [2024-10-21 09:54:25.784127] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:49.325 [2024-10-21 09:54:25.784161] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:49.325 pt2 00:09:49.325 09:54:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.325 09:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:49.325 09:54:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.325 09:54:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.325 [2024-10-21 09:54:25.795237] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:49.325 09:54:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.325 09:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:09:49.325 09:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:49.325 09:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:49.325 09:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:49.325 09:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:49.325 09:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:09:49.325 09:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:49.325 09:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:49.325 09:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:49.325 09:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:49.325 09:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:49.325 09:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.325 09:54:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.325 09:54:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.325 09:54:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.325 09:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:49.325 "name": "raid_bdev1", 00:09:49.325 "uuid": "4582a805-bc7a-4f57-9c49-953fff4766d7", 00:09:49.325 "strip_size_kb": 64, 00:09:49.325 "state": "configuring", 00:09:49.325 "raid_level": "concat", 00:09:49.325 "superblock": true, 00:09:49.325 "num_base_bdevs": 3, 00:09:49.325 "num_base_bdevs_discovered": 1, 00:09:49.325 "num_base_bdevs_operational": 3, 00:09:49.325 "base_bdevs_list": [ 00:09:49.325 { 00:09:49.325 "name": "pt1", 00:09:49.325 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:49.325 "is_configured": true, 00:09:49.325 "data_offset": 2048, 00:09:49.325 "data_size": 63488 00:09:49.325 }, 00:09:49.325 { 00:09:49.325 "name": null, 00:09:49.325 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:49.325 "is_configured": false, 00:09:49.325 "data_offset": 0, 00:09:49.325 "data_size": 63488 00:09:49.325 }, 00:09:49.325 { 00:09:49.325 "name": null, 00:09:49.325 
"uuid": "00000000-0000-0000-0000-000000000003", 00:09:49.325 "is_configured": false, 00:09:49.325 "data_offset": 2048, 00:09:49.325 "data_size": 63488 00:09:49.325 } 00:09:49.325 ] 00:09:49.325 }' 00:09:49.325 09:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:49.325 09:54:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.895 09:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:49.895 09:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:49.895 09:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:49.895 09:54:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.895 09:54:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.895 [2024-10-21 09:54:26.250564] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:49.895 [2024-10-21 09:54:26.250673] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:49.895 [2024-10-21 09:54:26.250697] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:09:49.895 [2024-10-21 09:54:26.250710] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:49.895 [2024-10-21 09:54:26.251266] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:49.895 [2024-10-21 09:54:26.251299] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:49.895 [2024-10-21 09:54:26.251393] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:49.895 [2024-10-21 09:54:26.251427] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:49.895 pt2 00:09:49.895 09:54:26 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.895 09:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:49.895 09:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:49.895 09:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:49.895 09:54:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.895 09:54:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.895 [2024-10-21 09:54:26.258515] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:49.895 [2024-10-21 09:54:26.258581] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:49.895 [2024-10-21 09:54:26.258597] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:49.895 [2024-10-21 09:54:26.258608] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:49.895 [2024-10-21 09:54:26.259002] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:49.895 [2024-10-21 09:54:26.259037] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:49.895 [2024-10-21 09:54:26.259100] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:49.895 [2024-10-21 09:54:26.259120] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:49.895 [2024-10-21 09:54:26.259242] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:09:49.895 [2024-10-21 09:54:26.259258] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:49.895 [2024-10-21 09:54:26.259529] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005c70 00:09:49.895 [2024-10-21 09:54:26.259710] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:09:49.895 [2024-10-21 09:54:26.259723] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:09:49.895 [2024-10-21 09:54:26.259870] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:49.895 pt3 00:09:49.895 09:54:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.895 09:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:49.895 09:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:49.895 09:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:49.895 09:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:49.895 09:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:49.895 09:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:49.895 09:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:49.895 09:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:49.895 09:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:49.895 09:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:49.895 09:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:49.895 09:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:49.895 09:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.895 09:54:26 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.895 09:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:49.895 09:54:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.895 09:54:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.895 09:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:49.895 "name": "raid_bdev1", 00:09:49.895 "uuid": "4582a805-bc7a-4f57-9c49-953fff4766d7", 00:09:49.895 "strip_size_kb": 64, 00:09:49.895 "state": "online", 00:09:49.895 "raid_level": "concat", 00:09:49.895 "superblock": true, 00:09:49.895 "num_base_bdevs": 3, 00:09:49.895 "num_base_bdevs_discovered": 3, 00:09:49.895 "num_base_bdevs_operational": 3, 00:09:49.895 "base_bdevs_list": [ 00:09:49.895 { 00:09:49.895 "name": "pt1", 00:09:49.895 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:49.895 "is_configured": true, 00:09:49.895 "data_offset": 2048, 00:09:49.895 "data_size": 63488 00:09:49.895 }, 00:09:49.895 { 00:09:49.895 "name": "pt2", 00:09:49.895 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:49.895 "is_configured": true, 00:09:49.895 "data_offset": 2048, 00:09:49.895 "data_size": 63488 00:09:49.895 }, 00:09:49.895 { 00:09:49.895 "name": "pt3", 00:09:49.895 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:49.895 "is_configured": true, 00:09:49.896 "data_offset": 2048, 00:09:49.896 "data_size": 63488 00:09:49.896 } 00:09:49.896 ] 00:09:49.896 }' 00:09:49.896 09:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:49.896 09:54:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.156 09:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:50.156 09:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=raid_bdev1 00:09:50.156 09:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:50.156 09:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:50.156 09:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:50.156 09:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:50.156 09:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:50.156 09:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:50.156 09:54:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.156 09:54:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.156 [2024-10-21 09:54:26.710095] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:50.156 09:54:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.156 09:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:50.156 "name": "raid_bdev1", 00:09:50.156 "aliases": [ 00:09:50.156 "4582a805-bc7a-4f57-9c49-953fff4766d7" 00:09:50.156 ], 00:09:50.156 "product_name": "Raid Volume", 00:09:50.156 "block_size": 512, 00:09:50.156 "num_blocks": 190464, 00:09:50.156 "uuid": "4582a805-bc7a-4f57-9c49-953fff4766d7", 00:09:50.156 "assigned_rate_limits": { 00:09:50.156 "rw_ios_per_sec": 0, 00:09:50.156 "rw_mbytes_per_sec": 0, 00:09:50.156 "r_mbytes_per_sec": 0, 00:09:50.156 "w_mbytes_per_sec": 0 00:09:50.156 }, 00:09:50.156 "claimed": false, 00:09:50.156 "zoned": false, 00:09:50.156 "supported_io_types": { 00:09:50.156 "read": true, 00:09:50.156 "write": true, 00:09:50.156 "unmap": true, 00:09:50.156 "flush": true, 00:09:50.156 "reset": true, 00:09:50.156 "nvme_admin": false, 00:09:50.156 "nvme_io": false, 
00:09:50.156 "nvme_io_md": false, 00:09:50.156 "write_zeroes": true, 00:09:50.156 "zcopy": false, 00:09:50.156 "get_zone_info": false, 00:09:50.156 "zone_management": false, 00:09:50.156 "zone_append": false, 00:09:50.156 "compare": false, 00:09:50.156 "compare_and_write": false, 00:09:50.156 "abort": false, 00:09:50.156 "seek_hole": false, 00:09:50.156 "seek_data": false, 00:09:50.156 "copy": false, 00:09:50.156 "nvme_iov_md": false 00:09:50.156 }, 00:09:50.156 "memory_domains": [ 00:09:50.156 { 00:09:50.156 "dma_device_id": "system", 00:09:50.156 "dma_device_type": 1 00:09:50.156 }, 00:09:50.156 { 00:09:50.156 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:50.156 "dma_device_type": 2 00:09:50.156 }, 00:09:50.156 { 00:09:50.156 "dma_device_id": "system", 00:09:50.156 "dma_device_type": 1 00:09:50.156 }, 00:09:50.156 { 00:09:50.156 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:50.156 "dma_device_type": 2 00:09:50.156 }, 00:09:50.156 { 00:09:50.156 "dma_device_id": "system", 00:09:50.156 "dma_device_type": 1 00:09:50.156 }, 00:09:50.156 { 00:09:50.156 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:50.156 "dma_device_type": 2 00:09:50.156 } 00:09:50.156 ], 00:09:50.156 "driver_specific": { 00:09:50.156 "raid": { 00:09:50.156 "uuid": "4582a805-bc7a-4f57-9c49-953fff4766d7", 00:09:50.156 "strip_size_kb": 64, 00:09:50.156 "state": "online", 00:09:50.156 "raid_level": "concat", 00:09:50.156 "superblock": true, 00:09:50.156 "num_base_bdevs": 3, 00:09:50.156 "num_base_bdevs_discovered": 3, 00:09:50.156 "num_base_bdevs_operational": 3, 00:09:50.156 "base_bdevs_list": [ 00:09:50.156 { 00:09:50.156 "name": "pt1", 00:09:50.156 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:50.156 "is_configured": true, 00:09:50.156 "data_offset": 2048, 00:09:50.156 "data_size": 63488 00:09:50.156 }, 00:09:50.156 { 00:09:50.156 "name": "pt2", 00:09:50.156 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:50.156 "is_configured": true, 00:09:50.156 "data_offset": 2048, 00:09:50.156 
"data_size": 63488 00:09:50.156 }, 00:09:50.156 { 00:09:50.156 "name": "pt3", 00:09:50.156 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:50.156 "is_configured": true, 00:09:50.156 "data_offset": 2048, 00:09:50.156 "data_size": 63488 00:09:50.156 } 00:09:50.156 ] 00:09:50.156 } 00:09:50.156 } 00:09:50.156 }' 00:09:50.416 09:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:50.416 09:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:50.416 pt2 00:09:50.416 pt3' 00:09:50.416 09:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:50.416 09:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:50.416 09:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:50.416 09:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:50.416 09:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:50.416 09:54:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.416 09:54:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.416 09:54:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.416 09:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:50.416 09:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:50.416 09:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:50.416 09:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:09:50.416 09:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:50.416 09:54:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.416 09:54:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.416 09:54:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.416 09:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:50.416 09:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:50.416 09:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:50.416 09:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:50.416 09:54:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.416 09:54:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.416 09:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:50.416 09:54:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.416 09:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:50.416 09:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:50.416 09:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:50.416 09:54:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.416 09:54:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.416 09:54:26 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:50.416 [2024-10-21 09:54:26.993495] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:50.416 09:54:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.677 09:54:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 4582a805-bc7a-4f57-9c49-953fff4766d7 '!=' 4582a805-bc7a-4f57-9c49-953fff4766d7 ']' 00:09:50.677 09:54:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:09:50.677 09:54:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:50.677 09:54:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:50.677 09:54:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 66428 00:09:50.677 09:54:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 66428 ']' 00:09:50.677 09:54:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 66428 00:09:50.677 09:54:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:09:50.677 09:54:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:50.677 09:54:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66428 00:09:50.677 09:54:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:50.677 09:54:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:50.677 killing process with pid 66428 00:09:50.677 09:54:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66428' 00:09:50.677 09:54:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 66428 00:09:50.677 [2024-10-21 09:54:27.071945] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
00:09:50.677 [2024-10-21 09:54:27.072076] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:50.677 09:54:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 66428 00:09:50.677 [2024-10-21 09:54:27.072148] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:50.677 [2024-10-21 09:54:27.072162] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:09:50.937 [2024-10-21 09:54:27.389258] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:52.318 09:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:52.318 00:09:52.318 real 0m5.276s 00:09:52.318 user 0m7.397s 00:09:52.318 sys 0m0.948s 00:09:52.318 09:54:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:52.318 09:54:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.318 ************************************ 00:09:52.318 END TEST raid_superblock_test 00:09:52.318 ************************************ 00:09:52.318 09:54:28 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:09:52.318 09:54:28 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:52.318 09:54:28 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:52.318 09:54:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:52.318 ************************************ 00:09:52.318 START TEST raid_read_error_test 00:09:52.318 ************************************ 00:09:52.318 09:54:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 3 read 00:09:52.318 09:54:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:52.318 09:54:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local 
num_base_bdevs=3 00:09:52.318 09:54:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:52.318 09:54:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:52.318 09:54:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:52.318 09:54:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:52.318 09:54:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:52.318 09:54:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:52.318 09:54:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:52.318 09:54:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:52.318 09:54:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:52.318 09:54:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:52.318 09:54:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:52.318 09:54:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:52.318 09:54:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:52.318 09:54:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:52.318 09:54:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:52.318 09:54:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:52.318 09:54:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:52.318 09:54:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:52.318 09:54:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:52.318 09:54:28 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:52.318 09:54:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:52.318 09:54:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:52.318 09:54:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:52.318 09:54:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.09Z3mzO1oJ 00:09:52.318 09:54:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=66687 00:09:52.318 09:54:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 66687 00:09:52.318 09:54:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:52.318 09:54:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 66687 ']' 00:09:52.318 09:54:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:52.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:52.318 09:54:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:52.318 09:54:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:52.318 09:54:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:52.318 09:54:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.318 [2024-10-21 09:54:28.761376] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
00:09:52.318 [2024-10-21 09:54:28.761490] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66687 ] 00:09:52.578 [2024-10-21 09:54:28.924709] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:52.578 [2024-10-21 09:54:29.073964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:52.837 [2024-10-21 09:54:29.338195] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:52.837 [2024-10-21 09:54:29.338369] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:53.097 09:54:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:53.097 09:54:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:53.097 09:54:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:53.097 09:54:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:53.097 09:54:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.097 09:54:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.097 BaseBdev1_malloc 00:09:53.097 09:54:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.097 09:54:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:53.097 09:54:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.097 09:54:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.097 true 00:09:53.097 09:54:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:53.097 09:54:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:53.097 09:54:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.097 09:54:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.097 [2024-10-21 09:54:29.665453] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:53.097 [2024-10-21 09:54:29.665520] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:53.097 [2024-10-21 09:54:29.665548] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:53.097 [2024-10-21 09:54:29.665565] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:53.097 [2024-10-21 09:54:29.667932] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:53.097 [2024-10-21 09:54:29.668041] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:53.097 BaseBdev1 00:09:53.097 09:54:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.097 09:54:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:53.097 09:54:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:53.097 09:54:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.097 09:54:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.357 BaseBdev2_malloc 00:09:53.357 09:54:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.357 09:54:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:53.357 09:54:29 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.357 09:54:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.357 true 00:09:53.357 09:54:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.357 09:54:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:53.357 09:54:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.357 09:54:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.357 [2024-10-21 09:54:29.746531] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:53.357 [2024-10-21 09:54:29.746619] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:53.357 [2024-10-21 09:54:29.746641] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:09:53.357 [2024-10-21 09:54:29.746676] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:53.357 [2024-10-21 09:54:29.749161] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:53.357 [2024-10-21 09:54:29.749203] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:53.357 BaseBdev2 00:09:53.357 09:54:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.357 09:54:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:53.357 09:54:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:53.357 09:54:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.357 09:54:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.357 BaseBdev3_malloc 00:09:53.358 09:54:29 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.358 09:54:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:53.358 09:54:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.358 09:54:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.358 true 00:09:53.358 09:54:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.358 09:54:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:53.358 09:54:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.358 09:54:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.358 [2024-10-21 09:54:29.838222] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:53.358 [2024-10-21 09:54:29.838288] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:53.358 [2024-10-21 09:54:29.838307] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:09:53.358 [2024-10-21 09:54:29.838319] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:53.358 [2024-10-21 09:54:29.840704] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:53.358 [2024-10-21 09:54:29.840744] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:53.358 BaseBdev3 00:09:53.358 09:54:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.358 09:54:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:53.358 09:54:29 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.358 09:54:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.358 [2024-10-21 09:54:29.850284] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:53.358 [2024-10-21 09:54:29.852366] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:53.358 [2024-10-21 09:54:29.852530] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:53.358 [2024-10-21 09:54:29.852748] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:09:53.358 [2024-10-21 09:54:29.852762] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:53.358 [2024-10-21 09:54:29.853021] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:09:53.358 [2024-10-21 09:54:29.853177] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:09:53.358 [2024-10-21 09:54:29.853190] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:09:53.358 [2024-10-21 09:54:29.853338] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:53.358 09:54:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.358 09:54:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:53.358 09:54:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:53.358 09:54:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:53.358 09:54:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:53.358 09:54:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:53.358 09:54:29 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:53.358 09:54:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:53.358 09:54:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:53.358 09:54:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:53.358 09:54:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:53.358 09:54:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.358 09:54:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:53.358 09:54:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.358 09:54:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.358 09:54:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.358 09:54:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:53.358 "name": "raid_bdev1", 00:09:53.358 "uuid": "c6276ac5-9f9e-4348-b1cd-54d6d6aa5f99", 00:09:53.358 "strip_size_kb": 64, 00:09:53.358 "state": "online", 00:09:53.358 "raid_level": "concat", 00:09:53.358 "superblock": true, 00:09:53.358 "num_base_bdevs": 3, 00:09:53.358 "num_base_bdevs_discovered": 3, 00:09:53.358 "num_base_bdevs_operational": 3, 00:09:53.358 "base_bdevs_list": [ 00:09:53.358 { 00:09:53.358 "name": "BaseBdev1", 00:09:53.358 "uuid": "de9382cd-c932-5176-8ad4-79df426885e8", 00:09:53.358 "is_configured": true, 00:09:53.358 "data_offset": 2048, 00:09:53.358 "data_size": 63488 00:09:53.358 }, 00:09:53.358 { 00:09:53.358 "name": "BaseBdev2", 00:09:53.358 "uuid": "69728029-917d-5f8a-8602-6d835fac8132", 00:09:53.358 "is_configured": true, 00:09:53.358 "data_offset": 2048, 00:09:53.358 "data_size": 63488 
00:09:53.358 }, 00:09:53.358 { 00:09:53.358 "name": "BaseBdev3", 00:09:53.358 "uuid": "a572cfef-6a76-55e3-bb95-1d8b1f53947f", 00:09:53.358 "is_configured": true, 00:09:53.358 "data_offset": 2048, 00:09:53.358 "data_size": 63488 00:09:53.358 } 00:09:53.358 ] 00:09:53.358 }' 00:09:53.358 09:54:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:53.358 09:54:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.927 09:54:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:53.927 09:54:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:53.927 [2024-10-21 09:54:30.386980] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:54.865 09:54:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:54.865 09:54:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.865 09:54:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.865 09:54:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.865 09:54:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:54.865 09:54:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:54.865 09:54:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:54.865 09:54:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:54.865 09:54:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:54.865 09:54:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:09:54.865 09:54:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:54.865 09:54:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:54.865 09:54:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:54.865 09:54:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:54.865 09:54:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:54.865 09:54:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:54.865 09:54:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.865 09:54:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.865 09:54:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:54.865 09:54:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.865 09:54:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.865 09:54:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.865 09:54:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.865 "name": "raid_bdev1", 00:09:54.865 "uuid": "c6276ac5-9f9e-4348-b1cd-54d6d6aa5f99", 00:09:54.865 "strip_size_kb": 64, 00:09:54.865 "state": "online", 00:09:54.865 "raid_level": "concat", 00:09:54.865 "superblock": true, 00:09:54.865 "num_base_bdevs": 3, 00:09:54.865 "num_base_bdevs_discovered": 3, 00:09:54.865 "num_base_bdevs_operational": 3, 00:09:54.865 "base_bdevs_list": [ 00:09:54.865 { 00:09:54.865 "name": "BaseBdev1", 00:09:54.865 "uuid": "de9382cd-c932-5176-8ad4-79df426885e8", 00:09:54.865 "is_configured": true, 00:09:54.865 "data_offset": 2048, 00:09:54.865 "data_size": 63488 
00:09:54.865 }, 00:09:54.865 { 00:09:54.865 "name": "BaseBdev2", 00:09:54.865 "uuid": "69728029-917d-5f8a-8602-6d835fac8132", 00:09:54.865 "is_configured": true, 00:09:54.865 "data_offset": 2048, 00:09:54.865 "data_size": 63488 00:09:54.865 }, 00:09:54.865 { 00:09:54.865 "name": "BaseBdev3", 00:09:54.865 "uuid": "a572cfef-6a76-55e3-bb95-1d8b1f53947f", 00:09:54.865 "is_configured": true, 00:09:54.865 "data_offset": 2048, 00:09:54.865 "data_size": 63488 00:09:54.865 } 00:09:54.865 ] 00:09:54.865 }' 00:09:54.865 09:54:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.865 09:54:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.435 09:54:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:55.435 09:54:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.435 09:54:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.435 [2024-10-21 09:54:31.723393] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:55.435 [2024-10-21 09:54:31.723452] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:55.435 [2024-10-21 09:54:31.726035] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:55.435 [2024-10-21 09:54:31.726085] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:55.435 [2024-10-21 09:54:31.726128] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:55.435 [2024-10-21 09:54:31.726139] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:09:55.435 09:54:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.435 { 00:09:55.435 "results": [ 00:09:55.435 { 00:09:55.435 "job": "raid_bdev1", 
00:09:55.435 "core_mask": "0x1", 00:09:55.435 "workload": "randrw", 00:09:55.435 "percentage": 50, 00:09:55.435 "status": "finished", 00:09:55.435 "queue_depth": 1, 00:09:55.435 "io_size": 131072, 00:09:55.435 "runtime": 1.336865, 00:09:55.435 "iops": 14183.182295893752, 00:09:55.435 "mibps": 1772.897786986719, 00:09:55.435 "io_failed": 1, 00:09:55.435 "io_timeout": 0, 00:09:55.435 "avg_latency_us": 99.27803130968903, 00:09:55.435 "min_latency_us": 26.047161572052403, 00:09:55.435 "max_latency_us": 1366.5257641921398 00:09:55.435 } 00:09:55.435 ], 00:09:55.435 "core_count": 1 00:09:55.435 } 00:09:55.435 09:54:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 66687 00:09:55.435 09:54:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 66687 ']' 00:09:55.435 09:54:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 66687 00:09:55.435 09:54:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:09:55.435 09:54:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:55.435 09:54:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66687 00:09:55.435 09:54:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:55.435 09:54:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:55.435 09:54:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66687' 00:09:55.435 killing process with pid 66687 00:09:55.435 09:54:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 66687 00:09:55.435 [2024-10-21 09:54:31.769249] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:55.435 09:54:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 66687 00:09:55.435 [2024-10-21 
09:54:32.020872] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:56.826 09:54:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:56.826 09:54:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.09Z3mzO1oJ 00:09:56.826 09:54:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:56.826 ************************************ 00:09:56.826 END TEST raid_read_error_test 00:09:56.826 ************************************ 00:09:56.826 09:54:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:09:56.827 09:54:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:56.827 09:54:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:56.827 09:54:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:56.827 09:54:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:09:56.827 00:09:56.827 real 0m4.628s 00:09:56.827 user 0m5.339s 00:09:56.827 sys 0m0.651s 00:09:56.827 09:54:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:56.827 09:54:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.827 09:54:33 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:09:56.827 09:54:33 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:56.827 09:54:33 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:56.827 09:54:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:56.827 ************************************ 00:09:56.827 START TEST raid_write_error_test 00:09:56.827 ************************************ 00:09:56.827 09:54:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 3 write 00:09:56.827 09:54:33 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:56.827 09:54:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:56.827 09:54:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:56.827 09:54:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:56.827 09:54:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:56.827 09:54:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:56.827 09:54:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:56.827 09:54:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:56.827 09:54:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:56.827 09:54:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:56.827 09:54:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:56.827 09:54:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:56.827 09:54:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:56.827 09:54:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:56.827 09:54:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:56.827 09:54:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:56.827 09:54:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:56.827 09:54:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:56.827 09:54:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:56.827 09:54:33 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:56.827 09:54:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:56.827 09:54:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:56.827 09:54:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:56.827 09:54:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:56.827 09:54:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:56.827 09:54:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.3fczOGLOhZ 00:09:56.827 09:54:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=66832 00:09:56.827 09:54:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 66832 00:09:56.827 09:54:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:56.827 09:54:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 66832 ']' 00:09:56.827 09:54:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:56.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:56.827 09:54:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:56.827 09:54:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:56.827 09:54:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:56.827 09:54:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.105 [2024-10-21 09:54:33.465682] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:09:57.105 [2024-10-21 09:54:33.465923] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66832 ] 00:09:57.105 [2024-10-21 09:54:33.623139] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:57.365 [2024-10-21 09:54:33.768438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:57.625 [2024-10-21 09:54:34.016396] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:57.625 [2024-10-21 09:54:34.016468] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:57.885 09:54:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:57.885 09:54:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:57.885 09:54:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:57.885 09:54:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:57.885 09:54:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.885 09:54:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.885 BaseBdev1_malloc 00:09:57.885 09:54:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.885 09:54:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:09:57.885 09:54:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.885 09:54:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.885 true 00:09:57.885 09:54:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.885 09:54:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:57.885 09:54:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.885 09:54:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.885 [2024-10-21 09:54:34.352844] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:57.885 [2024-10-21 09:54:34.352941] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:57.885 [2024-10-21 09:54:34.352969] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:57.885 [2024-10-21 09:54:34.352987] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:57.885 [2024-10-21 09:54:34.355174] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:57.885 [2024-10-21 09:54:34.355215] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:57.885 BaseBdev1 00:09:57.885 09:54:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.885 09:54:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:57.885 09:54:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:57.885 09:54:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.885 09:54:34 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:57.885 BaseBdev2_malloc 00:09:57.885 09:54:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.885 09:54:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:57.886 09:54:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.886 09:54:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.886 true 00:09:57.886 09:54:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.886 09:54:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:57.886 09:54:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.886 09:54:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.886 [2024-10-21 09:54:34.411461] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:57.886 [2024-10-21 09:54:34.411562] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:57.886 [2024-10-21 09:54:34.411605] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:09:57.886 [2024-10-21 09:54:34.411620] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:57.886 [2024-10-21 09:54:34.413862] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:57.886 [2024-10-21 09:54:34.413903] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:57.886 BaseBdev2 00:09:57.886 09:54:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.886 09:54:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:57.886 09:54:34 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:57.886 09:54:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.886 09:54:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.886 BaseBdev3_malloc 00:09:57.886 09:54:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.886 09:54:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:57.886 09:54:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.886 09:54:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.146 true 00:09:58.146 09:54:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.146 09:54:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:58.146 09:54:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.146 09:54:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.146 [2024-10-21 09:54:34.483314] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:58.146 [2024-10-21 09:54:34.483418] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:58.146 [2024-10-21 09:54:34.483450] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:09:58.146 [2024-10-21 09:54:34.483466] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:58.146 [2024-10-21 09:54:34.485688] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:58.146 [2024-10-21 09:54:34.485726] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:09:58.146 BaseBdev3 00:09:58.146 09:54:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.146 09:54:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:58.146 09:54:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.146 09:54:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.146 [2024-10-21 09:54:34.491358] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:58.146 [2024-10-21 09:54:34.493153] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:58.146 [2024-10-21 09:54:34.493234] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:58.146 [2024-10-21 09:54:34.493424] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:09:58.146 [2024-10-21 09:54:34.493437] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:58.146 [2024-10-21 09:54:34.493748] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:09:58.146 [2024-10-21 09:54:34.493956] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:09:58.146 [2024-10-21 09:54:34.494004] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:09:58.146 [2024-10-21 09:54:34.494206] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:58.146 09:54:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.146 09:54:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:58.146 09:54:34 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:58.146 09:54:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:58.146 09:54:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:58.146 09:54:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:58.146 09:54:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:58.146 09:54:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.146 09:54:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.146 09:54:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.146 09:54:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.146 09:54:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.146 09:54:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:58.146 09:54:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.146 09:54:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.146 09:54:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.146 09:54:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.146 "name": "raid_bdev1", 00:09:58.146 "uuid": "a80b38ad-7f51-4045-9509-26e500828d76", 00:09:58.146 "strip_size_kb": 64, 00:09:58.146 "state": "online", 00:09:58.146 "raid_level": "concat", 00:09:58.146 "superblock": true, 00:09:58.146 "num_base_bdevs": 3, 00:09:58.146 "num_base_bdevs_discovered": 3, 00:09:58.146 "num_base_bdevs_operational": 3, 00:09:58.146 "base_bdevs_list": [ 00:09:58.146 { 00:09:58.146 
"name": "BaseBdev1", 00:09:58.146 "uuid": "9aeb3f27-0fe4-5f83-a849-6029e405566d", 00:09:58.146 "is_configured": true, 00:09:58.146 "data_offset": 2048, 00:09:58.146 "data_size": 63488 00:09:58.146 }, 00:09:58.146 { 00:09:58.146 "name": "BaseBdev2", 00:09:58.146 "uuid": "bf7b0d06-fae0-57da-a9ca-9dbffb97522c", 00:09:58.146 "is_configured": true, 00:09:58.146 "data_offset": 2048, 00:09:58.146 "data_size": 63488 00:09:58.146 }, 00:09:58.146 { 00:09:58.146 "name": "BaseBdev3", 00:09:58.146 "uuid": "5854484e-a6ea-5851-a86b-15735d5b8f15", 00:09:58.146 "is_configured": true, 00:09:58.146 "data_offset": 2048, 00:09:58.146 "data_size": 63488 00:09:58.146 } 00:09:58.146 ] 00:09:58.146 }' 00:09:58.146 09:54:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.146 09:54:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.406 09:54:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:58.406 09:54:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:58.666 [2024-10-21 09:54:35.036051] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:59.606 09:54:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:59.606 09:54:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.606 09:54:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.606 09:54:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.606 09:54:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:59.606 09:54:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:59.606 09:54:35 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:59.606 09:54:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:59.606 09:54:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:59.606 09:54:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:59.606 09:54:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:59.606 09:54:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:59.606 09:54:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:59.606 09:54:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.606 09:54:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.606 09:54:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.606 09:54:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.606 09:54:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.606 09:54:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:59.607 09:54:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.607 09:54:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.607 09:54:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.607 09:54:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.607 "name": "raid_bdev1", 00:09:59.607 "uuid": "a80b38ad-7f51-4045-9509-26e500828d76", 00:09:59.607 "strip_size_kb": 64, 00:09:59.607 "state": "online", 
00:09:59.607 "raid_level": "concat", 00:09:59.607 "superblock": true, 00:09:59.607 "num_base_bdevs": 3, 00:09:59.607 "num_base_bdevs_discovered": 3, 00:09:59.607 "num_base_bdevs_operational": 3, 00:09:59.607 "base_bdevs_list": [ 00:09:59.607 { 00:09:59.607 "name": "BaseBdev1", 00:09:59.607 "uuid": "9aeb3f27-0fe4-5f83-a849-6029e405566d", 00:09:59.607 "is_configured": true, 00:09:59.607 "data_offset": 2048, 00:09:59.607 "data_size": 63488 00:09:59.607 }, 00:09:59.607 { 00:09:59.607 "name": "BaseBdev2", 00:09:59.607 "uuid": "bf7b0d06-fae0-57da-a9ca-9dbffb97522c", 00:09:59.607 "is_configured": true, 00:09:59.607 "data_offset": 2048, 00:09:59.607 "data_size": 63488 00:09:59.607 }, 00:09:59.607 { 00:09:59.607 "name": "BaseBdev3", 00:09:59.607 "uuid": "5854484e-a6ea-5851-a86b-15735d5b8f15", 00:09:59.607 "is_configured": true, 00:09:59.607 "data_offset": 2048, 00:09:59.607 "data_size": 63488 00:09:59.607 } 00:09:59.607 ] 00:09:59.607 }' 00:09:59.607 09:54:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.607 09:54:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.867 09:54:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:59.867 09:54:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.867 09:54:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.867 [2024-10-21 09:54:36.382687] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:59.867 [2024-10-21 09:54:36.382729] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:59.867 [2024-10-21 09:54:36.385836] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:59.867 [2024-10-21 09:54:36.385934] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:59.867 [2024-10-21 09:54:36.385981] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:59.867 [2024-10-21 09:54:36.385991] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:09:59.867 { 00:09:59.867 "results": [ 00:09:59.867 { 00:09:59.867 "job": "raid_bdev1", 00:09:59.867 "core_mask": "0x1", 00:09:59.867 "workload": "randrw", 00:09:59.867 "percentage": 50, 00:09:59.867 "status": "finished", 00:09:59.867 "queue_depth": 1, 00:09:59.867 "io_size": 131072, 00:09:59.867 "runtime": 1.34737, 00:09:59.867 "iops": 15844.200182577912, 00:09:59.867 "mibps": 1980.525022822239, 00:09:59.867 "io_failed": 1, 00:09:59.867 "io_timeout": 0, 00:09:59.867 "avg_latency_us": 87.63850771980157, 00:09:59.867 "min_latency_us": 25.9353711790393, 00:09:59.867 "max_latency_us": 1531.0812227074236 00:09:59.867 } 00:09:59.867 ], 00:09:59.867 "core_count": 1 00:09:59.867 } 00:09:59.867 09:54:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.867 09:54:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 66832 00:09:59.867 09:54:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 66832 ']' 00:09:59.867 09:54:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 66832 00:09:59.867 09:54:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:09:59.867 09:54:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:59.867 09:54:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66832 00:09:59.867 killing process with pid 66832 00:09:59.867 09:54:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:59.867 09:54:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:59.867 09:54:36 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66832' 00:09:59.867 09:54:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 66832 00:09:59.867 [2024-10-21 09:54:36.424703] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:59.867 09:54:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 66832 00:10:00.128 [2024-10-21 09:54:36.670785] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:01.512 09:54:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.3fczOGLOhZ 00:10:01.512 09:54:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:01.512 09:54:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:01.512 ************************************ 00:10:01.512 END TEST raid_write_error_test 00:10:01.512 ************************************ 00:10:01.512 09:54:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:10:01.512 09:54:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:10:01.512 09:54:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:01.512 09:54:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:01.512 09:54:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:10:01.512 00:10:01.512 real 0m4.496s 00:10:01.512 user 0m5.346s 00:10:01.512 sys 0m0.569s 00:10:01.512 09:54:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:01.512 09:54:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.512 09:54:37 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:01.512 09:54:37 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:10:01.512 09:54:37 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:01.512 09:54:37 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:01.512 09:54:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:01.512 ************************************ 00:10:01.512 START TEST raid_state_function_test 00:10:01.512 ************************************ 00:10:01.512 09:54:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 3 false 00:10:01.512 09:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:10:01.512 09:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:10:01.512 09:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:01.512 09:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:01.512 09:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:01.512 09:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:01.512 09:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:01.512 09:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:01.512 09:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:01.512 09:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:01.512 09:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:01.512 09:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:01.512 09:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:01.512 09:54:37 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:01.512 09:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:01.512 09:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:01.512 09:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:01.512 09:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:01.512 09:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:01.512 09:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:01.512 09:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:01.512 09:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:10:01.512 09:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:10:01.512 09:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:01.512 09:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:01.512 Process raid pid: 66977 00:10:01.512 09:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=66977 00:10:01.512 09:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:01.512 09:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66977' 00:10:01.512 09:54:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 66977 00:10:01.512 09:54:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 66977 ']' 00:10:01.512 09:54:37 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:01.513 09:54:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:01.513 09:54:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:01.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:01.513 09:54:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:01.513 09:54:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.513 [2024-10-21 09:54:38.020211] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:10:01.513 [2024-10-21 09:54:38.020411] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:01.773 [2024-10-21 09:54:38.186679] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:01.773 [2024-10-21 09:54:38.311996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:02.033 [2024-10-21 09:54:38.528639] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:02.033 [2024-10-21 09:54:38.528758] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:02.299 09:54:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:02.299 09:54:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:10:02.299 09:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:02.299 09:54:38 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.299 09:54:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.299 [2024-10-21 09:54:38.868890] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:02.299 [2024-10-21 09:54:38.869004] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:02.299 [2024-10-21 09:54:38.869037] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:02.299 [2024-10-21 09:54:38.869062] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:02.299 [2024-10-21 09:54:38.869081] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:02.299 [2024-10-21 09:54:38.869102] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:02.299 09:54:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.300 09:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:02.300 09:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:02.300 09:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:02.300 09:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:02.300 09:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:02.300 09:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:02.300 09:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.300 09:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.300 
09:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.300 09:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.300 09:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.300 09:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:02.300 09:54:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.300 09:54:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.569 09:54:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.569 09:54:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.569 "name": "Existed_Raid", 00:10:02.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.569 "strip_size_kb": 0, 00:10:02.569 "state": "configuring", 00:10:02.569 "raid_level": "raid1", 00:10:02.569 "superblock": false, 00:10:02.569 "num_base_bdevs": 3, 00:10:02.569 "num_base_bdevs_discovered": 0, 00:10:02.569 "num_base_bdevs_operational": 3, 00:10:02.569 "base_bdevs_list": [ 00:10:02.569 { 00:10:02.569 "name": "BaseBdev1", 00:10:02.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.569 "is_configured": false, 00:10:02.569 "data_offset": 0, 00:10:02.569 "data_size": 0 00:10:02.569 }, 00:10:02.569 { 00:10:02.569 "name": "BaseBdev2", 00:10:02.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.569 "is_configured": false, 00:10:02.569 "data_offset": 0, 00:10:02.569 "data_size": 0 00:10:02.569 }, 00:10:02.569 { 00:10:02.569 "name": "BaseBdev3", 00:10:02.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.569 "is_configured": false, 00:10:02.569 "data_offset": 0, 00:10:02.569 "data_size": 0 00:10:02.569 } 00:10:02.569 ] 00:10:02.569 }' 00:10:02.569 09:54:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.569 09:54:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.829 09:54:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:02.829 09:54:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.829 09:54:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.829 [2024-10-21 09:54:39.352026] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:02.829 [2024-10-21 09:54:39.352137] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000005b80 name Existed_Raid, state configuring 00:10:02.829 09:54:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.829 09:54:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:02.829 09:54:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.829 09:54:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.829 [2024-10-21 09:54:39.360018] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:02.829 [2024-10-21 09:54:39.360107] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:02.829 [2024-10-21 09:54:39.360156] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:02.829 [2024-10-21 09:54:39.360180] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:02.829 [2024-10-21 09:54:39.360199] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:02.829 [2024-10-21 09:54:39.360220] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:02.829 09:54:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.829 09:54:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:02.829 09:54:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.829 09:54:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.829 [2024-10-21 09:54:39.407723] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:02.829 BaseBdev1 00:10:02.829 09:54:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.829 09:54:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:02.829 09:54:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:02.829 09:54:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:02.829 09:54:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:02.829 09:54:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:02.829 09:54:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:02.829 09:54:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:02.829 09:54:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.829 09:54:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.829 09:54:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.829 09:54:39 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:02.829 09:54:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.829 09:54:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.090 [ 00:10:03.090 { 00:10:03.090 "name": "BaseBdev1", 00:10:03.090 "aliases": [ 00:10:03.090 "5fe77c4c-ac95-484e-9188-d1aa7d3c63ce" 00:10:03.090 ], 00:10:03.090 "product_name": "Malloc disk", 00:10:03.090 "block_size": 512, 00:10:03.090 "num_blocks": 65536, 00:10:03.090 "uuid": "5fe77c4c-ac95-484e-9188-d1aa7d3c63ce", 00:10:03.090 "assigned_rate_limits": { 00:10:03.090 "rw_ios_per_sec": 0, 00:10:03.090 "rw_mbytes_per_sec": 0, 00:10:03.090 "r_mbytes_per_sec": 0, 00:10:03.090 "w_mbytes_per_sec": 0 00:10:03.090 }, 00:10:03.090 "claimed": true, 00:10:03.090 "claim_type": "exclusive_write", 00:10:03.090 "zoned": false, 00:10:03.090 "supported_io_types": { 00:10:03.090 "read": true, 00:10:03.090 "write": true, 00:10:03.090 "unmap": true, 00:10:03.090 "flush": true, 00:10:03.090 "reset": true, 00:10:03.090 "nvme_admin": false, 00:10:03.090 "nvme_io": false, 00:10:03.090 "nvme_io_md": false, 00:10:03.090 "write_zeroes": true, 00:10:03.090 "zcopy": true, 00:10:03.090 "get_zone_info": false, 00:10:03.090 "zone_management": false, 00:10:03.090 "zone_append": false, 00:10:03.090 "compare": false, 00:10:03.090 "compare_and_write": false, 00:10:03.090 "abort": true, 00:10:03.090 "seek_hole": false, 00:10:03.090 "seek_data": false, 00:10:03.090 "copy": true, 00:10:03.090 "nvme_iov_md": false 00:10:03.090 }, 00:10:03.090 "memory_domains": [ 00:10:03.090 { 00:10:03.090 "dma_device_id": "system", 00:10:03.090 "dma_device_type": 1 00:10:03.090 }, 00:10:03.090 { 00:10:03.090 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.090 "dma_device_type": 2 00:10:03.090 } 00:10:03.090 ], 00:10:03.090 "driver_specific": {} 00:10:03.090 } 00:10:03.090 ] 00:10:03.090 09:54:39 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.090 09:54:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:03.090 09:54:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:03.090 09:54:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:03.090 09:54:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:03.090 09:54:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:03.090 09:54:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:03.090 09:54:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:03.090 09:54:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.090 09:54:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.090 09:54:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.090 09:54:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.090 09:54:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.090 09:54:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.090 09:54:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.090 09:54:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.090 09:54:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.090 09:54:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:10:03.090 "name": "Existed_Raid", 00:10:03.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.090 "strip_size_kb": 0, 00:10:03.090 "state": "configuring", 00:10:03.090 "raid_level": "raid1", 00:10:03.090 "superblock": false, 00:10:03.090 "num_base_bdevs": 3, 00:10:03.090 "num_base_bdevs_discovered": 1, 00:10:03.090 "num_base_bdevs_operational": 3, 00:10:03.090 "base_bdevs_list": [ 00:10:03.090 { 00:10:03.091 "name": "BaseBdev1", 00:10:03.091 "uuid": "5fe77c4c-ac95-484e-9188-d1aa7d3c63ce", 00:10:03.091 "is_configured": true, 00:10:03.091 "data_offset": 0, 00:10:03.091 "data_size": 65536 00:10:03.091 }, 00:10:03.091 { 00:10:03.091 "name": "BaseBdev2", 00:10:03.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.091 "is_configured": false, 00:10:03.091 "data_offset": 0, 00:10:03.091 "data_size": 0 00:10:03.091 }, 00:10:03.091 { 00:10:03.091 "name": "BaseBdev3", 00:10:03.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.091 "is_configured": false, 00:10:03.091 "data_offset": 0, 00:10:03.091 "data_size": 0 00:10:03.091 } 00:10:03.091 ] 00:10:03.091 }' 00:10:03.091 09:54:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.091 09:54:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.351 09:54:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:03.351 09:54:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.351 09:54:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.351 [2024-10-21 09:54:39.918946] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:03.351 [2024-10-21 09:54:39.919057] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000005f00 name Existed_Raid, state configuring 00:10:03.351 09:54:39 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.351 09:54:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:03.351 09:54:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.351 09:54:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.351 [2024-10-21 09:54:39.926958] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:03.351 [2024-10-21 09:54:39.928845] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:03.351 [2024-10-21 09:54:39.928924] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:03.351 [2024-10-21 09:54:39.928938] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:03.351 [2024-10-21 09:54:39.928948] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:03.351 09:54:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.351 09:54:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:03.351 09:54:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:03.351 09:54:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:03.351 09:54:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:03.351 09:54:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:03.351 09:54:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:03.351 09:54:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:10:03.351 09:54:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:03.351 09:54:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.351 09:54:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.351 09:54:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.351 09:54:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.351 09:54:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.351 09:54:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.351 09:54:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.351 09:54:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.611 09:54:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.611 09:54:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.611 "name": "Existed_Raid", 00:10:03.611 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.611 "strip_size_kb": 0, 00:10:03.611 "state": "configuring", 00:10:03.612 "raid_level": "raid1", 00:10:03.612 "superblock": false, 00:10:03.612 "num_base_bdevs": 3, 00:10:03.612 "num_base_bdevs_discovered": 1, 00:10:03.612 "num_base_bdevs_operational": 3, 00:10:03.612 "base_bdevs_list": [ 00:10:03.612 { 00:10:03.612 "name": "BaseBdev1", 00:10:03.612 "uuid": "5fe77c4c-ac95-484e-9188-d1aa7d3c63ce", 00:10:03.612 "is_configured": true, 00:10:03.612 "data_offset": 0, 00:10:03.612 "data_size": 65536 00:10:03.612 }, 00:10:03.612 { 00:10:03.612 "name": "BaseBdev2", 00:10:03.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.612 
"is_configured": false, 00:10:03.612 "data_offset": 0, 00:10:03.612 "data_size": 0 00:10:03.612 }, 00:10:03.612 { 00:10:03.612 "name": "BaseBdev3", 00:10:03.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.612 "is_configured": false, 00:10:03.612 "data_offset": 0, 00:10:03.612 "data_size": 0 00:10:03.612 } 00:10:03.612 ] 00:10:03.612 }' 00:10:03.612 09:54:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.612 09:54:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.872 09:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:03.872 09:54:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.872 09:54:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.872 [2024-10-21 09:54:40.334489] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:03.872 BaseBdev2 00:10:03.872 09:54:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.872 09:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:03.872 09:54:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:03.872 09:54:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:03.872 09:54:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:03.872 09:54:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:03.872 09:54:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:03.872 09:54:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:03.872 09:54:40 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.872 09:54:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.872 09:54:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.872 09:54:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:03.872 09:54:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.872 09:54:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.872 [ 00:10:03.872 { 00:10:03.872 "name": "BaseBdev2", 00:10:03.872 "aliases": [ 00:10:03.872 "c297e76c-c1b6-4ece-bc70-7ec973721cef" 00:10:03.872 ], 00:10:03.872 "product_name": "Malloc disk", 00:10:03.872 "block_size": 512, 00:10:03.872 "num_blocks": 65536, 00:10:03.872 "uuid": "c297e76c-c1b6-4ece-bc70-7ec973721cef", 00:10:03.872 "assigned_rate_limits": { 00:10:03.872 "rw_ios_per_sec": 0, 00:10:03.872 "rw_mbytes_per_sec": 0, 00:10:03.872 "r_mbytes_per_sec": 0, 00:10:03.872 "w_mbytes_per_sec": 0 00:10:03.872 }, 00:10:03.872 "claimed": true, 00:10:03.872 "claim_type": "exclusive_write", 00:10:03.872 "zoned": false, 00:10:03.872 "supported_io_types": { 00:10:03.872 "read": true, 00:10:03.872 "write": true, 00:10:03.872 "unmap": true, 00:10:03.872 "flush": true, 00:10:03.872 "reset": true, 00:10:03.872 "nvme_admin": false, 00:10:03.872 "nvme_io": false, 00:10:03.872 "nvme_io_md": false, 00:10:03.872 "write_zeroes": true, 00:10:03.872 "zcopy": true, 00:10:03.872 "get_zone_info": false, 00:10:03.872 "zone_management": false, 00:10:03.872 "zone_append": false, 00:10:03.872 "compare": false, 00:10:03.872 "compare_and_write": false, 00:10:03.872 "abort": true, 00:10:03.872 "seek_hole": false, 00:10:03.872 "seek_data": false, 00:10:03.872 "copy": true, 00:10:03.872 "nvme_iov_md": false 00:10:03.872 }, 00:10:03.872 
"memory_domains": [ 00:10:03.872 { 00:10:03.872 "dma_device_id": "system", 00:10:03.872 "dma_device_type": 1 00:10:03.872 }, 00:10:03.872 { 00:10:03.872 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.872 "dma_device_type": 2 00:10:03.872 } 00:10:03.872 ], 00:10:03.872 "driver_specific": {} 00:10:03.872 } 00:10:03.872 ] 00:10:03.872 09:54:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.872 09:54:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:03.872 09:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:03.872 09:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:03.872 09:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:03.872 09:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:03.872 09:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:03.872 09:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:03.872 09:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:03.872 09:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:03.872 09:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.872 09:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.872 09:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.872 09:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.872 09:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:03.872 09:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.872 09:54:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.872 09:54:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.872 09:54:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.872 09:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.872 "name": "Existed_Raid", 00:10:03.872 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.872 "strip_size_kb": 0, 00:10:03.872 "state": "configuring", 00:10:03.872 "raid_level": "raid1", 00:10:03.872 "superblock": false, 00:10:03.872 "num_base_bdevs": 3, 00:10:03.872 "num_base_bdevs_discovered": 2, 00:10:03.872 "num_base_bdevs_operational": 3, 00:10:03.872 "base_bdevs_list": [ 00:10:03.872 { 00:10:03.872 "name": "BaseBdev1", 00:10:03.872 "uuid": "5fe77c4c-ac95-484e-9188-d1aa7d3c63ce", 00:10:03.872 "is_configured": true, 00:10:03.872 "data_offset": 0, 00:10:03.872 "data_size": 65536 00:10:03.872 }, 00:10:03.872 { 00:10:03.872 "name": "BaseBdev2", 00:10:03.872 "uuid": "c297e76c-c1b6-4ece-bc70-7ec973721cef", 00:10:03.872 "is_configured": true, 00:10:03.872 "data_offset": 0, 00:10:03.872 "data_size": 65536 00:10:03.872 }, 00:10:03.872 { 00:10:03.872 "name": "BaseBdev3", 00:10:03.872 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.872 "is_configured": false, 00:10:03.872 "data_offset": 0, 00:10:03.872 "data_size": 0 00:10:03.872 } 00:10:03.872 ] 00:10:03.872 }' 00:10:03.872 09:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.872 09:54:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.464 09:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 00:10:04.464 09:54:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.464 09:54:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.464 [2024-10-21 09:54:40.862525] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:04.464 [2024-10-21 09:54:40.862688] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:10:04.464 [2024-10-21 09:54:40.862728] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:04.464 [2024-10-21 09:54:40.863063] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:10:04.464 [2024-10-21 09:54:40.863322] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:10:04.464 [2024-10-21 09:54:40.863373] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006280 00:10:04.464 [2024-10-21 09:54:40.863723] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:04.464 BaseBdev3 00:10:04.464 09:54:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.464 09:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:04.464 09:54:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:04.464 09:54:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:04.464 09:54:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:04.464 09:54:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:04.464 09:54:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:04.464 09:54:40 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:04.464 09:54:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.464 09:54:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.464 09:54:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.464 09:54:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:04.464 09:54:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.464 09:54:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.464 [ 00:10:04.464 { 00:10:04.464 "name": "BaseBdev3", 00:10:04.464 "aliases": [ 00:10:04.464 "259dfea7-ecf0-4fe7-89ff-623387424321" 00:10:04.464 ], 00:10:04.464 "product_name": "Malloc disk", 00:10:04.464 "block_size": 512, 00:10:04.464 "num_blocks": 65536, 00:10:04.464 "uuid": "259dfea7-ecf0-4fe7-89ff-623387424321", 00:10:04.464 "assigned_rate_limits": { 00:10:04.464 "rw_ios_per_sec": 0, 00:10:04.464 "rw_mbytes_per_sec": 0, 00:10:04.464 "r_mbytes_per_sec": 0, 00:10:04.464 "w_mbytes_per_sec": 0 00:10:04.464 }, 00:10:04.464 "claimed": true, 00:10:04.464 "claim_type": "exclusive_write", 00:10:04.464 "zoned": false, 00:10:04.464 "supported_io_types": { 00:10:04.464 "read": true, 00:10:04.464 "write": true, 00:10:04.464 "unmap": true, 00:10:04.464 "flush": true, 00:10:04.464 "reset": true, 00:10:04.464 "nvme_admin": false, 00:10:04.464 "nvme_io": false, 00:10:04.464 "nvme_io_md": false, 00:10:04.464 "write_zeroes": true, 00:10:04.464 "zcopy": true, 00:10:04.464 "get_zone_info": false, 00:10:04.464 "zone_management": false, 00:10:04.464 "zone_append": false, 00:10:04.464 "compare": false, 00:10:04.464 "compare_and_write": false, 00:10:04.464 "abort": true, 00:10:04.464 "seek_hole": false, 00:10:04.464 "seek_data": false, 00:10:04.464 
"copy": true, 00:10:04.464 "nvme_iov_md": false 00:10:04.464 }, 00:10:04.464 "memory_domains": [ 00:10:04.464 { 00:10:04.464 "dma_device_id": "system", 00:10:04.464 "dma_device_type": 1 00:10:04.464 }, 00:10:04.464 { 00:10:04.464 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.464 "dma_device_type": 2 00:10:04.464 } 00:10:04.464 ], 00:10:04.464 "driver_specific": {} 00:10:04.464 } 00:10:04.464 ] 00:10:04.464 09:54:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.464 09:54:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:04.464 09:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:04.464 09:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:04.464 09:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:04.464 09:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:04.464 09:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:04.464 09:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:04.464 09:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:04.464 09:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:04.464 09:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:04.464 09:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.464 09:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:04.464 09:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.464 09:54:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.464 09:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:04.464 09:54:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.464 09:54:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.464 09:54:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.465 09:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.465 "name": "Existed_Raid", 00:10:04.465 "uuid": "f864770d-d861-4831-8b5d-f98bdebb4eb9", 00:10:04.465 "strip_size_kb": 0, 00:10:04.465 "state": "online", 00:10:04.465 "raid_level": "raid1", 00:10:04.465 "superblock": false, 00:10:04.465 "num_base_bdevs": 3, 00:10:04.465 "num_base_bdevs_discovered": 3, 00:10:04.465 "num_base_bdevs_operational": 3, 00:10:04.465 "base_bdevs_list": [ 00:10:04.465 { 00:10:04.465 "name": "BaseBdev1", 00:10:04.465 "uuid": "5fe77c4c-ac95-484e-9188-d1aa7d3c63ce", 00:10:04.465 "is_configured": true, 00:10:04.465 "data_offset": 0, 00:10:04.465 "data_size": 65536 00:10:04.465 }, 00:10:04.465 { 00:10:04.465 "name": "BaseBdev2", 00:10:04.465 "uuid": "c297e76c-c1b6-4ece-bc70-7ec973721cef", 00:10:04.465 "is_configured": true, 00:10:04.465 "data_offset": 0, 00:10:04.465 "data_size": 65536 00:10:04.465 }, 00:10:04.465 { 00:10:04.465 "name": "BaseBdev3", 00:10:04.465 "uuid": "259dfea7-ecf0-4fe7-89ff-623387424321", 00:10:04.465 "is_configured": true, 00:10:04.465 "data_offset": 0, 00:10:04.465 "data_size": 65536 00:10:04.465 } 00:10:04.465 ] 00:10:04.465 }' 00:10:04.465 09:54:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.465 09:54:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.035 09:54:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:05.035 09:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:05.035 09:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:05.035 09:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:05.035 09:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:05.035 09:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:05.035 09:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:05.035 09:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:05.035 09:54:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.035 09:54:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.035 [2024-10-21 09:54:41.398008] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:05.035 09:54:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.035 09:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:05.035 "name": "Existed_Raid", 00:10:05.035 "aliases": [ 00:10:05.035 "f864770d-d861-4831-8b5d-f98bdebb4eb9" 00:10:05.035 ], 00:10:05.035 "product_name": "Raid Volume", 00:10:05.035 "block_size": 512, 00:10:05.035 "num_blocks": 65536, 00:10:05.035 "uuid": "f864770d-d861-4831-8b5d-f98bdebb4eb9", 00:10:05.035 "assigned_rate_limits": { 00:10:05.035 "rw_ios_per_sec": 0, 00:10:05.035 "rw_mbytes_per_sec": 0, 00:10:05.035 "r_mbytes_per_sec": 0, 00:10:05.035 "w_mbytes_per_sec": 0 00:10:05.035 }, 00:10:05.035 "claimed": false, 00:10:05.035 "zoned": false, 
00:10:05.035 "supported_io_types": { 00:10:05.035 "read": true, 00:10:05.035 "write": true, 00:10:05.035 "unmap": false, 00:10:05.035 "flush": false, 00:10:05.035 "reset": true, 00:10:05.035 "nvme_admin": false, 00:10:05.035 "nvme_io": false, 00:10:05.035 "nvme_io_md": false, 00:10:05.035 "write_zeroes": true, 00:10:05.035 "zcopy": false, 00:10:05.035 "get_zone_info": false, 00:10:05.035 "zone_management": false, 00:10:05.035 "zone_append": false, 00:10:05.035 "compare": false, 00:10:05.035 "compare_and_write": false, 00:10:05.035 "abort": false, 00:10:05.035 "seek_hole": false, 00:10:05.035 "seek_data": false, 00:10:05.035 "copy": false, 00:10:05.035 "nvme_iov_md": false 00:10:05.035 }, 00:10:05.035 "memory_domains": [ 00:10:05.035 { 00:10:05.035 "dma_device_id": "system", 00:10:05.035 "dma_device_type": 1 00:10:05.035 }, 00:10:05.035 { 00:10:05.035 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.035 "dma_device_type": 2 00:10:05.035 }, 00:10:05.035 { 00:10:05.035 "dma_device_id": "system", 00:10:05.035 "dma_device_type": 1 00:10:05.035 }, 00:10:05.035 { 00:10:05.035 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.035 "dma_device_type": 2 00:10:05.035 }, 00:10:05.035 { 00:10:05.035 "dma_device_id": "system", 00:10:05.035 "dma_device_type": 1 00:10:05.035 }, 00:10:05.035 { 00:10:05.035 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.035 "dma_device_type": 2 00:10:05.035 } 00:10:05.035 ], 00:10:05.035 "driver_specific": { 00:10:05.035 "raid": { 00:10:05.035 "uuid": "f864770d-d861-4831-8b5d-f98bdebb4eb9", 00:10:05.035 "strip_size_kb": 0, 00:10:05.035 "state": "online", 00:10:05.035 "raid_level": "raid1", 00:10:05.035 "superblock": false, 00:10:05.035 "num_base_bdevs": 3, 00:10:05.035 "num_base_bdevs_discovered": 3, 00:10:05.035 "num_base_bdevs_operational": 3, 00:10:05.035 "base_bdevs_list": [ 00:10:05.035 { 00:10:05.035 "name": "BaseBdev1", 00:10:05.035 "uuid": "5fe77c4c-ac95-484e-9188-d1aa7d3c63ce", 00:10:05.035 "is_configured": true, 00:10:05.035 
"data_offset": 0, 00:10:05.035 "data_size": 65536 00:10:05.035 }, 00:10:05.035 { 00:10:05.035 "name": "BaseBdev2", 00:10:05.035 "uuid": "c297e76c-c1b6-4ece-bc70-7ec973721cef", 00:10:05.035 "is_configured": true, 00:10:05.035 "data_offset": 0, 00:10:05.035 "data_size": 65536 00:10:05.035 }, 00:10:05.035 { 00:10:05.035 "name": "BaseBdev3", 00:10:05.035 "uuid": "259dfea7-ecf0-4fe7-89ff-623387424321", 00:10:05.035 "is_configured": true, 00:10:05.035 "data_offset": 0, 00:10:05.035 "data_size": 65536 00:10:05.035 } 00:10:05.035 ] 00:10:05.035 } 00:10:05.035 } 00:10:05.035 }' 00:10:05.035 09:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:05.035 09:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:05.035 BaseBdev2 00:10:05.035 BaseBdev3' 00:10:05.035 09:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:05.035 09:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:05.035 09:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:05.035 09:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:05.035 09:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:05.035 09:54:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.035 09:54:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.035 09:54:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.035 09:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:10:05.035 09:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:05.035 09:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:05.035 09:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:05.035 09:54:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.035 09:54:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.035 09:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:05.035 09:54:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.035 09:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:05.035 09:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:05.035 09:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:05.035 09:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:05.035 09:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:05.035 09:54:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.035 09:54:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.296 09:54:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.296 09:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:05.296 09:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:10:05.296 09:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:05.296 09:54:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.296 09:54:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.296 [2024-10-21 09:54:41.673225] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:05.296 09:54:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.296 09:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:05.296 09:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:05.296 09:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:05.296 09:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:05.296 09:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:05.296 09:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:10:05.296 09:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:05.296 09:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:05.296 09:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:05.296 09:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:05.296 09:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:05.296 09:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.296 09:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:10:05.296 09:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.296 09:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.296 09:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.296 09:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:05.296 09:54:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.296 09:54:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.296 09:54:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.296 09:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.296 "name": "Existed_Raid", 00:10:05.296 "uuid": "f864770d-d861-4831-8b5d-f98bdebb4eb9", 00:10:05.296 "strip_size_kb": 0, 00:10:05.296 "state": "online", 00:10:05.296 "raid_level": "raid1", 00:10:05.296 "superblock": false, 00:10:05.296 "num_base_bdevs": 3, 00:10:05.296 "num_base_bdevs_discovered": 2, 00:10:05.296 "num_base_bdevs_operational": 2, 00:10:05.296 "base_bdevs_list": [ 00:10:05.296 { 00:10:05.296 "name": null, 00:10:05.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.296 "is_configured": false, 00:10:05.296 "data_offset": 0, 00:10:05.296 "data_size": 65536 00:10:05.296 }, 00:10:05.296 { 00:10:05.296 "name": "BaseBdev2", 00:10:05.296 "uuid": "c297e76c-c1b6-4ece-bc70-7ec973721cef", 00:10:05.296 "is_configured": true, 00:10:05.296 "data_offset": 0, 00:10:05.296 "data_size": 65536 00:10:05.296 }, 00:10:05.296 { 00:10:05.296 "name": "BaseBdev3", 00:10:05.296 "uuid": "259dfea7-ecf0-4fe7-89ff-623387424321", 00:10:05.296 "is_configured": true, 00:10:05.296 "data_offset": 0, 00:10:05.296 "data_size": 65536 00:10:05.296 } 00:10:05.296 ] 
00:10:05.296 }' 00:10:05.296 09:54:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.296 09:54:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.867 09:54:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:05.867 09:54:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:05.867 09:54:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.867 09:54:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:05.867 09:54:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.867 09:54:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.867 09:54:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.867 09:54:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:05.867 09:54:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:05.867 09:54:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:05.867 09:54:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.867 09:54:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.867 [2024-10-21 09:54:42.277344] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:05.867 09:54:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.867 09:54:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:05.867 09:54:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:05.867 09:54:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.867 09:54:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:05.867 09:54:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.867 09:54:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.867 09:54:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.867 09:54:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:05.867 09:54:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:05.867 09:54:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:05.867 09:54:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.867 09:54:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.867 [2024-10-21 09:54:42.433545] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:05.867 [2024-10-21 09:54:42.433686] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:06.129 [2024-10-21 09:54:42.529339] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:06.129 [2024-10-21 09:54:42.529485] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:06.129 [2024-10-21 09:54:42.529529] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state offline 00:10:06.129 09:54:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.129 09:54:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:06.129 09:54:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:06.129 09:54:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.129 09:54:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:06.129 09:54:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.129 09:54:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.129 09:54:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.129 09:54:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:06.129 09:54:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:06.129 09:54:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:06.129 09:54:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:06.129 09:54:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:06.129 09:54:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:06.129 09:54:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.129 09:54:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.129 BaseBdev2 00:10:06.129 09:54:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.129 09:54:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:06.129 09:54:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:06.129 09:54:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:06.129 
09:54:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:06.129 09:54:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:06.129 09:54:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:06.129 09:54:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:06.129 09:54:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.129 09:54:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.129 09:54:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.129 09:54:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:06.129 09:54:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.129 09:54:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.129 [ 00:10:06.129 { 00:10:06.129 "name": "BaseBdev2", 00:10:06.129 "aliases": [ 00:10:06.129 "a3db3b71-638d-482e-8dd2-fd17188db352" 00:10:06.129 ], 00:10:06.129 "product_name": "Malloc disk", 00:10:06.129 "block_size": 512, 00:10:06.129 "num_blocks": 65536, 00:10:06.129 "uuid": "a3db3b71-638d-482e-8dd2-fd17188db352", 00:10:06.129 "assigned_rate_limits": { 00:10:06.129 "rw_ios_per_sec": 0, 00:10:06.129 "rw_mbytes_per_sec": 0, 00:10:06.129 "r_mbytes_per_sec": 0, 00:10:06.129 "w_mbytes_per_sec": 0 00:10:06.129 }, 00:10:06.129 "claimed": false, 00:10:06.129 "zoned": false, 00:10:06.129 "supported_io_types": { 00:10:06.129 "read": true, 00:10:06.129 "write": true, 00:10:06.129 "unmap": true, 00:10:06.129 "flush": true, 00:10:06.129 "reset": true, 00:10:06.129 "nvme_admin": false, 00:10:06.129 "nvme_io": false, 00:10:06.129 "nvme_io_md": false, 00:10:06.129 "write_zeroes": true, 
00:10:06.129 "zcopy": true, 00:10:06.129 "get_zone_info": false, 00:10:06.129 "zone_management": false, 00:10:06.129 "zone_append": false, 00:10:06.129 "compare": false, 00:10:06.129 "compare_and_write": false, 00:10:06.129 "abort": true, 00:10:06.129 "seek_hole": false, 00:10:06.129 "seek_data": false, 00:10:06.129 "copy": true, 00:10:06.129 "nvme_iov_md": false 00:10:06.129 }, 00:10:06.129 "memory_domains": [ 00:10:06.129 { 00:10:06.129 "dma_device_id": "system", 00:10:06.129 "dma_device_type": 1 00:10:06.129 }, 00:10:06.129 { 00:10:06.129 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.129 "dma_device_type": 2 00:10:06.129 } 00:10:06.129 ], 00:10:06.129 "driver_specific": {} 00:10:06.129 } 00:10:06.129 ] 00:10:06.129 09:54:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.129 09:54:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:06.129 09:54:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:06.129 09:54:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:06.129 09:54:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:06.129 09:54:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.129 09:54:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.129 BaseBdev3 00:10:06.129 09:54:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.129 09:54:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:06.129 09:54:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:06.129 09:54:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:06.129 09:54:42 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:06.129 09:54:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:06.129 09:54:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:06.129 09:54:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:06.129 09:54:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.129 09:54:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.129 09:54:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.129 09:54:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:06.129 09:54:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.129 09:54:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.390 [ 00:10:06.390 { 00:10:06.390 "name": "BaseBdev3", 00:10:06.390 "aliases": [ 00:10:06.390 "c103f165-df08-49ef-8772-624bd1683637" 00:10:06.390 ], 00:10:06.390 "product_name": "Malloc disk", 00:10:06.390 "block_size": 512, 00:10:06.390 "num_blocks": 65536, 00:10:06.390 "uuid": "c103f165-df08-49ef-8772-624bd1683637", 00:10:06.390 "assigned_rate_limits": { 00:10:06.390 "rw_ios_per_sec": 0, 00:10:06.390 "rw_mbytes_per_sec": 0, 00:10:06.390 "r_mbytes_per_sec": 0, 00:10:06.390 "w_mbytes_per_sec": 0 00:10:06.390 }, 00:10:06.390 "claimed": false, 00:10:06.390 "zoned": false, 00:10:06.390 "supported_io_types": { 00:10:06.390 "read": true, 00:10:06.390 "write": true, 00:10:06.390 "unmap": true, 00:10:06.390 "flush": true, 00:10:06.390 "reset": true, 00:10:06.390 "nvme_admin": false, 00:10:06.390 "nvme_io": false, 00:10:06.390 "nvme_io_md": false, 00:10:06.390 "write_zeroes": true, 
00:10:06.390 "zcopy": true, 00:10:06.390 "get_zone_info": false, 00:10:06.390 "zone_management": false, 00:10:06.390 "zone_append": false, 00:10:06.390 "compare": false, 00:10:06.390 "compare_and_write": false, 00:10:06.390 "abort": true, 00:10:06.390 "seek_hole": false, 00:10:06.390 "seek_data": false, 00:10:06.390 "copy": true, 00:10:06.390 "nvme_iov_md": false 00:10:06.390 }, 00:10:06.390 "memory_domains": [ 00:10:06.390 { 00:10:06.390 "dma_device_id": "system", 00:10:06.390 "dma_device_type": 1 00:10:06.390 }, 00:10:06.390 { 00:10:06.390 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.390 "dma_device_type": 2 00:10:06.390 } 00:10:06.390 ], 00:10:06.390 "driver_specific": {} 00:10:06.390 } 00:10:06.390 ] 00:10:06.390 09:54:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.390 09:54:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:06.390 09:54:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:06.390 09:54:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:06.390 09:54:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:06.390 09:54:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.390 09:54:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.390 [2024-10-21 09:54:42.745802] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:06.390 [2024-10-21 09:54:42.745850] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:06.390 [2024-10-21 09:54:42.745868] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:06.390 [2024-10-21 09:54:42.747795] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:06.390 09:54:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.390 09:54:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:06.390 09:54:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:06.390 09:54:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:06.390 09:54:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:06.390 09:54:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:06.390 09:54:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:06.390 09:54:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.390 09:54:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.390 09:54:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.390 09:54:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.390 09:54:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.390 09:54:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.390 09:54:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:06.390 09:54:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.390 09:54:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.390 09:54:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:10:06.390 "name": "Existed_Raid", 00:10:06.390 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.390 "strip_size_kb": 0, 00:10:06.390 "state": "configuring", 00:10:06.390 "raid_level": "raid1", 00:10:06.390 "superblock": false, 00:10:06.390 "num_base_bdevs": 3, 00:10:06.390 "num_base_bdevs_discovered": 2, 00:10:06.390 "num_base_bdevs_operational": 3, 00:10:06.390 "base_bdevs_list": [ 00:10:06.390 { 00:10:06.390 "name": "BaseBdev1", 00:10:06.390 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.390 "is_configured": false, 00:10:06.390 "data_offset": 0, 00:10:06.390 "data_size": 0 00:10:06.390 }, 00:10:06.390 { 00:10:06.390 "name": "BaseBdev2", 00:10:06.390 "uuid": "a3db3b71-638d-482e-8dd2-fd17188db352", 00:10:06.390 "is_configured": true, 00:10:06.390 "data_offset": 0, 00:10:06.390 "data_size": 65536 00:10:06.390 }, 00:10:06.390 { 00:10:06.390 "name": "BaseBdev3", 00:10:06.390 "uuid": "c103f165-df08-49ef-8772-624bd1683637", 00:10:06.390 "is_configured": true, 00:10:06.390 "data_offset": 0, 00:10:06.390 "data_size": 65536 00:10:06.390 } 00:10:06.390 ] 00:10:06.390 }' 00:10:06.390 09:54:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.390 09:54:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.650 09:54:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:06.650 09:54:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.650 09:54:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.650 [2024-10-21 09:54:43.213024] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:06.650 09:54:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.650 09:54:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:10:06.650 09:54:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:06.650 09:54:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:06.650 09:54:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:06.650 09:54:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:06.650 09:54:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:06.650 09:54:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.650 09:54:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.650 09:54:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.650 09:54:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.650 09:54:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.650 09:54:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:06.650 09:54:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.650 09:54:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.650 09:54:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.909 09:54:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.909 "name": "Existed_Raid", 00:10:06.909 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.909 "strip_size_kb": 0, 00:10:06.909 "state": "configuring", 00:10:06.909 "raid_level": "raid1", 00:10:06.909 "superblock": false, 00:10:06.909 "num_base_bdevs": 3, 
00:10:06.909 "num_base_bdevs_discovered": 1, 00:10:06.909 "num_base_bdevs_operational": 3, 00:10:06.909 "base_bdevs_list": [ 00:10:06.909 { 00:10:06.909 "name": "BaseBdev1", 00:10:06.909 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.909 "is_configured": false, 00:10:06.909 "data_offset": 0, 00:10:06.909 "data_size": 0 00:10:06.909 }, 00:10:06.909 { 00:10:06.909 "name": null, 00:10:06.909 "uuid": "a3db3b71-638d-482e-8dd2-fd17188db352", 00:10:06.909 "is_configured": false, 00:10:06.909 "data_offset": 0, 00:10:06.909 "data_size": 65536 00:10:06.909 }, 00:10:06.909 { 00:10:06.909 "name": "BaseBdev3", 00:10:06.909 "uuid": "c103f165-df08-49ef-8772-624bd1683637", 00:10:06.909 "is_configured": true, 00:10:06.909 "data_offset": 0, 00:10:06.909 "data_size": 65536 00:10:06.909 } 00:10:06.909 ] 00:10:06.909 }' 00:10:06.909 09:54:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.909 09:54:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.169 09:54:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.169 09:54:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.169 09:54:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.169 09:54:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:07.169 09:54:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.169 09:54:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:07.169 09:54:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:07.169 09:54:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.169 09:54:43 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.169 [2024-10-21 09:54:43.683667] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:07.169 BaseBdev1 00:10:07.169 09:54:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.169 09:54:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:07.169 09:54:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:07.169 09:54:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:07.169 09:54:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:07.169 09:54:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:07.169 09:54:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:07.169 09:54:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:07.169 09:54:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.169 09:54:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.169 09:54:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.169 09:54:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:07.169 09:54:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.169 09:54:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.169 [ 00:10:07.169 { 00:10:07.169 "name": "BaseBdev1", 00:10:07.169 "aliases": [ 00:10:07.169 "574eeaf4-87d2-46e6-9a9d-582bef977ad5" 00:10:07.169 ], 00:10:07.169 "product_name": "Malloc disk", 
00:10:07.169 "block_size": 512, 00:10:07.169 "num_blocks": 65536, 00:10:07.169 "uuid": "574eeaf4-87d2-46e6-9a9d-582bef977ad5", 00:10:07.169 "assigned_rate_limits": { 00:10:07.169 "rw_ios_per_sec": 0, 00:10:07.169 "rw_mbytes_per_sec": 0, 00:10:07.169 "r_mbytes_per_sec": 0, 00:10:07.169 "w_mbytes_per_sec": 0 00:10:07.169 }, 00:10:07.169 "claimed": true, 00:10:07.169 "claim_type": "exclusive_write", 00:10:07.169 "zoned": false, 00:10:07.169 "supported_io_types": { 00:10:07.169 "read": true, 00:10:07.169 "write": true, 00:10:07.169 "unmap": true, 00:10:07.169 "flush": true, 00:10:07.169 "reset": true, 00:10:07.169 "nvme_admin": false, 00:10:07.169 "nvme_io": false, 00:10:07.169 "nvme_io_md": false, 00:10:07.169 "write_zeroes": true, 00:10:07.169 "zcopy": true, 00:10:07.169 "get_zone_info": false, 00:10:07.169 "zone_management": false, 00:10:07.169 "zone_append": false, 00:10:07.169 "compare": false, 00:10:07.169 "compare_and_write": false, 00:10:07.169 "abort": true, 00:10:07.169 "seek_hole": false, 00:10:07.169 "seek_data": false, 00:10:07.169 "copy": true, 00:10:07.169 "nvme_iov_md": false 00:10:07.169 }, 00:10:07.169 "memory_domains": [ 00:10:07.169 { 00:10:07.169 "dma_device_id": "system", 00:10:07.169 "dma_device_type": 1 00:10:07.170 }, 00:10:07.170 { 00:10:07.170 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.170 "dma_device_type": 2 00:10:07.170 } 00:10:07.170 ], 00:10:07.170 "driver_specific": {} 00:10:07.170 } 00:10:07.170 ] 00:10:07.170 09:54:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.170 09:54:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:07.170 09:54:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:07.170 09:54:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:07.170 09:54:43 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:07.170 09:54:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:07.170 09:54:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:07.170 09:54:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:07.170 09:54:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.170 09:54:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.170 09:54:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:07.170 09:54:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.170 09:54:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.170 09:54:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.170 09:54:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.170 09:54:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:07.170 09:54:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.429 09:54:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:07.429 "name": "Existed_Raid", 00:10:07.429 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:07.429 "strip_size_kb": 0, 00:10:07.429 "state": "configuring", 00:10:07.429 "raid_level": "raid1", 00:10:07.429 "superblock": false, 00:10:07.429 "num_base_bdevs": 3, 00:10:07.429 "num_base_bdevs_discovered": 2, 00:10:07.429 "num_base_bdevs_operational": 3, 00:10:07.429 "base_bdevs_list": [ 00:10:07.429 { 00:10:07.429 "name": "BaseBdev1", 00:10:07.429 "uuid": 
"574eeaf4-87d2-46e6-9a9d-582bef977ad5", 00:10:07.429 "is_configured": true, 00:10:07.429 "data_offset": 0, 00:10:07.429 "data_size": 65536 00:10:07.429 }, 00:10:07.429 { 00:10:07.429 "name": null, 00:10:07.429 "uuid": "a3db3b71-638d-482e-8dd2-fd17188db352", 00:10:07.429 "is_configured": false, 00:10:07.429 "data_offset": 0, 00:10:07.429 "data_size": 65536 00:10:07.429 }, 00:10:07.429 { 00:10:07.429 "name": "BaseBdev3", 00:10:07.429 "uuid": "c103f165-df08-49ef-8772-624bd1683637", 00:10:07.429 "is_configured": true, 00:10:07.429 "data_offset": 0, 00:10:07.429 "data_size": 65536 00:10:07.429 } 00:10:07.429 ] 00:10:07.429 }' 00:10:07.429 09:54:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.429 09:54:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.688 09:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:07.688 09:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.688 09:54:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.688 09:54:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.688 09:54:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.688 09:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:07.688 09:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:07.688 09:54:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.688 09:54:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.688 [2024-10-21 09:54:44.238834] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:07.688 09:54:44 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.688 09:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:07.688 09:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:07.688 09:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:07.688 09:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:07.688 09:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:07.688 09:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:07.688 09:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.688 09:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.688 09:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:07.688 09:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.688 09:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.688 09:54:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.688 09:54:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.688 09:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:07.688 09:54:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.948 09:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:07.948 "name": "Existed_Raid", 00:10:07.948 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:07.948 "strip_size_kb": 0, 00:10:07.948 "state": "configuring", 00:10:07.948 "raid_level": "raid1", 00:10:07.948 "superblock": false, 00:10:07.948 "num_base_bdevs": 3, 00:10:07.948 "num_base_bdevs_discovered": 1, 00:10:07.948 "num_base_bdevs_operational": 3, 00:10:07.948 "base_bdevs_list": [ 00:10:07.948 { 00:10:07.948 "name": "BaseBdev1", 00:10:07.948 "uuid": "574eeaf4-87d2-46e6-9a9d-582bef977ad5", 00:10:07.948 "is_configured": true, 00:10:07.948 "data_offset": 0, 00:10:07.948 "data_size": 65536 00:10:07.948 }, 00:10:07.948 { 00:10:07.948 "name": null, 00:10:07.948 "uuid": "a3db3b71-638d-482e-8dd2-fd17188db352", 00:10:07.948 "is_configured": false, 00:10:07.948 "data_offset": 0, 00:10:07.948 "data_size": 65536 00:10:07.948 }, 00:10:07.948 { 00:10:07.948 "name": null, 00:10:07.948 "uuid": "c103f165-df08-49ef-8772-624bd1683637", 00:10:07.948 "is_configured": false, 00:10:07.948 "data_offset": 0, 00:10:07.948 "data_size": 65536 00:10:07.948 } 00:10:07.948 ] 00:10:07.948 }' 00:10:07.948 09:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.948 09:54:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.208 09:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.208 09:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:08.208 09:54:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.208 09:54:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.208 09:54:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.208 09:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:08.208 09:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:08.208 09:54:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.208 09:54:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.208 [2024-10-21 09:54:44.773991] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:08.208 09:54:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.208 09:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:08.208 09:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:08.208 09:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:08.208 09:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:08.208 09:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:08.208 09:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:08.208 09:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:08.208 09:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:08.208 09:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:08.208 09:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:08.208 09:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.208 09:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:08.208 09:54:44 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.208 09:54:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.467 09:54:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.467 09:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:08.467 "name": "Existed_Raid", 00:10:08.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.467 "strip_size_kb": 0, 00:10:08.467 "state": "configuring", 00:10:08.467 "raid_level": "raid1", 00:10:08.467 "superblock": false, 00:10:08.467 "num_base_bdevs": 3, 00:10:08.467 "num_base_bdevs_discovered": 2, 00:10:08.467 "num_base_bdevs_operational": 3, 00:10:08.467 "base_bdevs_list": [ 00:10:08.467 { 00:10:08.467 "name": "BaseBdev1", 00:10:08.467 "uuid": "574eeaf4-87d2-46e6-9a9d-582bef977ad5", 00:10:08.467 "is_configured": true, 00:10:08.467 "data_offset": 0, 00:10:08.467 "data_size": 65536 00:10:08.467 }, 00:10:08.468 { 00:10:08.468 "name": null, 00:10:08.468 "uuid": "a3db3b71-638d-482e-8dd2-fd17188db352", 00:10:08.468 "is_configured": false, 00:10:08.468 "data_offset": 0, 00:10:08.468 "data_size": 65536 00:10:08.468 }, 00:10:08.468 { 00:10:08.468 "name": "BaseBdev3", 00:10:08.468 "uuid": "c103f165-df08-49ef-8772-624bd1683637", 00:10:08.468 "is_configured": true, 00:10:08.468 "data_offset": 0, 00:10:08.468 "data_size": 65536 00:10:08.468 } 00:10:08.468 ] 00:10:08.468 }' 00:10:08.468 09:54:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:08.468 09:54:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.727 09:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.727 09:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:08.727 09:54:45 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.727 09:54:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.727 09:54:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.727 09:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:08.727 09:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:08.727 09:54:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.727 09:54:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.727 [2024-10-21 09:54:45.277137] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:08.987 09:54:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.987 09:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:08.987 09:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:08.987 09:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:08.987 09:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:08.987 09:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:08.987 09:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:08.987 09:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:08.987 09:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:08.987 09:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:08.987 09:54:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:08.987 09:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.987 09:54:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.987 09:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:08.987 09:54:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.987 09:54:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.987 09:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:08.987 "name": "Existed_Raid", 00:10:08.987 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.987 "strip_size_kb": 0, 00:10:08.987 "state": "configuring", 00:10:08.987 "raid_level": "raid1", 00:10:08.987 "superblock": false, 00:10:08.987 "num_base_bdevs": 3, 00:10:08.987 "num_base_bdevs_discovered": 1, 00:10:08.987 "num_base_bdevs_operational": 3, 00:10:08.987 "base_bdevs_list": [ 00:10:08.987 { 00:10:08.987 "name": null, 00:10:08.987 "uuid": "574eeaf4-87d2-46e6-9a9d-582bef977ad5", 00:10:08.987 "is_configured": false, 00:10:08.987 "data_offset": 0, 00:10:08.987 "data_size": 65536 00:10:08.987 }, 00:10:08.987 { 00:10:08.987 "name": null, 00:10:08.987 "uuid": "a3db3b71-638d-482e-8dd2-fd17188db352", 00:10:08.987 "is_configured": false, 00:10:08.987 "data_offset": 0, 00:10:08.987 "data_size": 65536 00:10:08.987 }, 00:10:08.987 { 00:10:08.987 "name": "BaseBdev3", 00:10:08.987 "uuid": "c103f165-df08-49ef-8772-624bd1683637", 00:10:08.987 "is_configured": true, 00:10:08.987 "data_offset": 0, 00:10:08.987 "data_size": 65536 00:10:08.987 } 00:10:08.987 ] 00:10:08.987 }' 00:10:08.987 09:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:08.987 09:54:45 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:10:09.249 09:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:09.249 09:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.249 09:54:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.249 09:54:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.249 09:54:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.249 09:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:09.249 09:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:09.249 09:54:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.249 09:54:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.249 [2024-10-21 09:54:45.811980] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:09.249 09:54:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.249 09:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:09.249 09:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:09.249 09:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:09.249 09:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:09.249 09:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:09.249 09:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:10:09.249 09:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.249 09:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.249 09:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.249 09:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.249 09:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.249 09:54:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.249 09:54:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.249 09:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:09.249 09:54:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.511 09:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.511 "name": "Existed_Raid", 00:10:09.511 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.511 "strip_size_kb": 0, 00:10:09.511 "state": "configuring", 00:10:09.511 "raid_level": "raid1", 00:10:09.511 "superblock": false, 00:10:09.511 "num_base_bdevs": 3, 00:10:09.511 "num_base_bdevs_discovered": 2, 00:10:09.511 "num_base_bdevs_operational": 3, 00:10:09.511 "base_bdevs_list": [ 00:10:09.511 { 00:10:09.511 "name": null, 00:10:09.511 "uuid": "574eeaf4-87d2-46e6-9a9d-582bef977ad5", 00:10:09.511 "is_configured": false, 00:10:09.511 "data_offset": 0, 00:10:09.511 "data_size": 65536 00:10:09.511 }, 00:10:09.511 { 00:10:09.511 "name": "BaseBdev2", 00:10:09.511 "uuid": "a3db3b71-638d-482e-8dd2-fd17188db352", 00:10:09.511 "is_configured": true, 00:10:09.511 "data_offset": 0, 00:10:09.511 "data_size": 65536 00:10:09.511 }, 00:10:09.511 { 
00:10:09.511 "name": "BaseBdev3", 00:10:09.511 "uuid": "c103f165-df08-49ef-8772-624bd1683637", 00:10:09.511 "is_configured": true, 00:10:09.511 "data_offset": 0, 00:10:09.511 "data_size": 65536 00:10:09.511 } 00:10:09.511 ] 00:10:09.511 }' 00:10:09.511 09:54:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.511 09:54:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.771 09:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.771 09:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:09.771 09:54:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.771 09:54:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.771 09:54:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.771 09:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:09.771 09:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.771 09:54:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.771 09:54:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.771 09:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:09.771 09:54:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.771 09:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 574eeaf4-87d2-46e6-9a9d-582bef977ad5 00:10:09.771 09:54:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.771 09:54:46 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.031 [2024-10-21 09:54:46.390358] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:10.031 [2024-10-21 09:54:46.390406] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:10:10.031 [2024-10-21 09:54:46.390415] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:10.031 [2024-10-21 09:54:46.390691] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:10.031 [2024-10-21 09:54:46.390894] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:10:10.031 [2024-10-21 09:54:46.390917] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006600 00:10:10.031 [2024-10-21 09:54:46.391188] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:10.031 NewBaseBdev 00:10:10.031 09:54:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.031 09:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:10.031 09:54:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:10:10.031 09:54:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:10.031 09:54:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:10.031 09:54:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:10.031 09:54:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:10.031 09:54:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:10.031 09:54:46 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.031 09:54:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.031 09:54:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.031 09:54:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:10.031 09:54:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.031 09:54:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.031 [ 00:10:10.031 { 00:10:10.031 "name": "NewBaseBdev", 00:10:10.031 "aliases": [ 00:10:10.031 "574eeaf4-87d2-46e6-9a9d-582bef977ad5" 00:10:10.031 ], 00:10:10.031 "product_name": "Malloc disk", 00:10:10.031 "block_size": 512, 00:10:10.031 "num_blocks": 65536, 00:10:10.031 "uuid": "574eeaf4-87d2-46e6-9a9d-582bef977ad5", 00:10:10.031 "assigned_rate_limits": { 00:10:10.031 "rw_ios_per_sec": 0, 00:10:10.031 "rw_mbytes_per_sec": 0, 00:10:10.031 "r_mbytes_per_sec": 0, 00:10:10.031 "w_mbytes_per_sec": 0 00:10:10.031 }, 00:10:10.031 "claimed": true, 00:10:10.031 "claim_type": "exclusive_write", 00:10:10.031 "zoned": false, 00:10:10.031 "supported_io_types": { 00:10:10.031 "read": true, 00:10:10.031 "write": true, 00:10:10.031 "unmap": true, 00:10:10.031 "flush": true, 00:10:10.031 "reset": true, 00:10:10.031 "nvme_admin": false, 00:10:10.031 "nvme_io": false, 00:10:10.031 "nvme_io_md": false, 00:10:10.031 "write_zeroes": true, 00:10:10.031 "zcopy": true, 00:10:10.031 "get_zone_info": false, 00:10:10.031 "zone_management": false, 00:10:10.031 "zone_append": false, 00:10:10.031 "compare": false, 00:10:10.031 "compare_and_write": false, 00:10:10.031 "abort": true, 00:10:10.031 "seek_hole": false, 00:10:10.031 "seek_data": false, 00:10:10.031 "copy": true, 00:10:10.031 "nvme_iov_md": false 00:10:10.031 }, 00:10:10.031 "memory_domains": [ 00:10:10.032 { 00:10:10.032 
"dma_device_id": "system", 00:10:10.032 "dma_device_type": 1 00:10:10.032 }, 00:10:10.032 { 00:10:10.032 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.032 "dma_device_type": 2 00:10:10.032 } 00:10:10.032 ], 00:10:10.032 "driver_specific": {} 00:10:10.032 } 00:10:10.032 ] 00:10:10.032 09:54:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.032 09:54:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:10.032 09:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:10.032 09:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:10.032 09:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:10.032 09:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:10.032 09:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:10.032 09:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:10.032 09:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.032 09:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.032 09:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.032 09:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.032 09:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.032 09:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:10.032 09:54:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:10:10.032 09:54:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.032 09:54:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.032 09:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.032 "name": "Existed_Raid", 00:10:10.032 "uuid": "b88e9123-6f8f-46f3-bf4e-c80984b5e3c8", 00:10:10.032 "strip_size_kb": 0, 00:10:10.032 "state": "online", 00:10:10.032 "raid_level": "raid1", 00:10:10.032 "superblock": false, 00:10:10.032 "num_base_bdevs": 3, 00:10:10.032 "num_base_bdevs_discovered": 3, 00:10:10.032 "num_base_bdevs_operational": 3, 00:10:10.032 "base_bdevs_list": [ 00:10:10.032 { 00:10:10.032 "name": "NewBaseBdev", 00:10:10.032 "uuid": "574eeaf4-87d2-46e6-9a9d-582bef977ad5", 00:10:10.032 "is_configured": true, 00:10:10.032 "data_offset": 0, 00:10:10.032 "data_size": 65536 00:10:10.032 }, 00:10:10.032 { 00:10:10.032 "name": "BaseBdev2", 00:10:10.032 "uuid": "a3db3b71-638d-482e-8dd2-fd17188db352", 00:10:10.032 "is_configured": true, 00:10:10.032 "data_offset": 0, 00:10:10.032 "data_size": 65536 00:10:10.032 }, 00:10:10.032 { 00:10:10.032 "name": "BaseBdev3", 00:10:10.032 "uuid": "c103f165-df08-49ef-8772-624bd1683637", 00:10:10.032 "is_configured": true, 00:10:10.032 "data_offset": 0, 00:10:10.032 "data_size": 65536 00:10:10.032 } 00:10:10.032 ] 00:10:10.032 }' 00:10:10.032 09:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.032 09:54:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.292 09:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:10.292 09:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:10.292 09:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:10.292 09:54:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:10.292 09:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:10.292 09:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:10.292 09:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:10.292 09:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:10.292 09:54:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.292 09:54:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.292 [2024-10-21 09:54:46.818008] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:10.292 09:54:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.292 09:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:10.292 "name": "Existed_Raid", 00:10:10.292 "aliases": [ 00:10:10.292 "b88e9123-6f8f-46f3-bf4e-c80984b5e3c8" 00:10:10.292 ], 00:10:10.292 "product_name": "Raid Volume", 00:10:10.292 "block_size": 512, 00:10:10.292 "num_blocks": 65536, 00:10:10.292 "uuid": "b88e9123-6f8f-46f3-bf4e-c80984b5e3c8", 00:10:10.292 "assigned_rate_limits": { 00:10:10.292 "rw_ios_per_sec": 0, 00:10:10.292 "rw_mbytes_per_sec": 0, 00:10:10.292 "r_mbytes_per_sec": 0, 00:10:10.292 "w_mbytes_per_sec": 0 00:10:10.292 }, 00:10:10.292 "claimed": false, 00:10:10.292 "zoned": false, 00:10:10.292 "supported_io_types": { 00:10:10.292 "read": true, 00:10:10.292 "write": true, 00:10:10.292 "unmap": false, 00:10:10.292 "flush": false, 00:10:10.292 "reset": true, 00:10:10.292 "nvme_admin": false, 00:10:10.292 "nvme_io": false, 00:10:10.292 "nvme_io_md": false, 00:10:10.292 "write_zeroes": true, 00:10:10.292 "zcopy": false, 00:10:10.292 
"get_zone_info": false, 00:10:10.292 "zone_management": false, 00:10:10.292 "zone_append": false, 00:10:10.292 "compare": false, 00:10:10.292 "compare_and_write": false, 00:10:10.292 "abort": false, 00:10:10.292 "seek_hole": false, 00:10:10.292 "seek_data": false, 00:10:10.292 "copy": false, 00:10:10.292 "nvme_iov_md": false 00:10:10.292 }, 00:10:10.292 "memory_domains": [ 00:10:10.292 { 00:10:10.292 "dma_device_id": "system", 00:10:10.292 "dma_device_type": 1 00:10:10.292 }, 00:10:10.292 { 00:10:10.292 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.292 "dma_device_type": 2 00:10:10.292 }, 00:10:10.292 { 00:10:10.292 "dma_device_id": "system", 00:10:10.292 "dma_device_type": 1 00:10:10.292 }, 00:10:10.292 { 00:10:10.292 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.292 "dma_device_type": 2 00:10:10.292 }, 00:10:10.292 { 00:10:10.292 "dma_device_id": "system", 00:10:10.292 "dma_device_type": 1 00:10:10.292 }, 00:10:10.292 { 00:10:10.292 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.292 "dma_device_type": 2 00:10:10.292 } 00:10:10.292 ], 00:10:10.292 "driver_specific": { 00:10:10.292 "raid": { 00:10:10.292 "uuid": "b88e9123-6f8f-46f3-bf4e-c80984b5e3c8", 00:10:10.292 "strip_size_kb": 0, 00:10:10.292 "state": "online", 00:10:10.292 "raid_level": "raid1", 00:10:10.292 "superblock": false, 00:10:10.292 "num_base_bdevs": 3, 00:10:10.292 "num_base_bdevs_discovered": 3, 00:10:10.292 "num_base_bdevs_operational": 3, 00:10:10.292 "base_bdevs_list": [ 00:10:10.292 { 00:10:10.292 "name": "NewBaseBdev", 00:10:10.292 "uuid": "574eeaf4-87d2-46e6-9a9d-582bef977ad5", 00:10:10.292 "is_configured": true, 00:10:10.292 "data_offset": 0, 00:10:10.292 "data_size": 65536 00:10:10.292 }, 00:10:10.292 { 00:10:10.292 "name": "BaseBdev2", 00:10:10.292 "uuid": "a3db3b71-638d-482e-8dd2-fd17188db352", 00:10:10.292 "is_configured": true, 00:10:10.292 "data_offset": 0, 00:10:10.292 "data_size": 65536 00:10:10.292 }, 00:10:10.292 { 00:10:10.292 "name": "BaseBdev3", 00:10:10.292 "uuid": 
"c103f165-df08-49ef-8772-624bd1683637", 00:10:10.292 "is_configured": true, 00:10:10.292 "data_offset": 0, 00:10:10.292 "data_size": 65536 00:10:10.292 } 00:10:10.292 ] 00:10:10.292 } 00:10:10.292 } 00:10:10.292 }' 00:10:10.292 09:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:10.553 09:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:10.553 BaseBdev2 00:10:10.553 BaseBdev3' 00:10:10.553 09:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:10.553 09:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:10.553 09:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:10.553 09:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:10.553 09:54:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.553 09:54:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.553 09:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:10.553 09:54:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.553 09:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:10.553 09:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:10.553 09:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:10.553 09:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:10:10.553 09:54:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.553 09:54:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.553 09:54:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:10.553 09:54:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.553 09:54:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:10.553 09:54:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:10.554 09:54:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:10.554 09:54:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:10.554 09:54:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:10.554 09:54:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.554 09:54:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.554 09:54:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.554 09:54:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:10.554 09:54:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:10.554 09:54:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:10.554 09:54:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.554 09:54:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.554 
[2024-10-21 09:54:47.065258] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:10.554 [2024-10-21 09:54:47.065303] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:10.554 [2024-10-21 09:54:47.065385] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:10.554 [2024-10-21 09:54:47.065709] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:10.554 [2024-10-21 09:54:47.065746] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state offline 00:10:10.554 09:54:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.554 09:54:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 66977 00:10:10.554 09:54:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 66977 ']' 00:10:10.554 09:54:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 66977 00:10:10.554 09:54:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:10:10.554 09:54:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:10.554 09:54:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66977 00:10:10.554 killing process with pid 66977 00:10:10.554 09:54:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:10.554 09:54:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:10.554 09:54:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66977' 00:10:10.554 09:54:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 66977 00:10:10.554 [2024-10-21 
09:54:47.110447] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:10.554 09:54:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 66977 00:10:11.124 [2024-10-21 09:54:47.414072] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:12.062 09:54:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:12.062 00:10:12.062 real 0m10.623s 00:10:12.062 user 0m16.927s 00:10:12.062 sys 0m1.831s 00:10:12.062 09:54:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:12.062 09:54:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.062 ************************************ 00:10:12.062 END TEST raid_state_function_test 00:10:12.062 ************************************ 00:10:12.062 09:54:48 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:10:12.062 09:54:48 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:12.062 09:54:48 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:12.062 09:54:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:12.062 ************************************ 00:10:12.062 START TEST raid_state_function_test_sb 00:10:12.062 ************************************ 00:10:12.062 09:54:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 3 true 00:10:12.062 09:54:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:10:12.063 09:54:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:10:12.063 09:54:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:12.063 09:54:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:12.063 09:54:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:12.063 09:54:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:12.063 09:54:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:12.063 09:54:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:12.063 09:54:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:12.063 09:54:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:12.063 09:54:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:12.063 09:54:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:12.063 09:54:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:12.063 09:54:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:12.063 09:54:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:12.063 09:54:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:12.063 09:54:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:12.063 09:54:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:12.063 09:54:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:12.063 09:54:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:12.063 09:54:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:12.063 09:54:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:10:12.063 
09:54:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:10:12.063 09:54:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:12.063 09:54:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:12.063 09:54:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=67597 00:10:12.063 09:54:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:12.063 09:54:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67597' 00:10:12.063 Process raid pid: 67597 00:10:12.063 09:54:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 67597 00:10:12.063 09:54:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 67597 ']' 00:10:12.063 09:54:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:12.063 09:54:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:12.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:12.063 09:54:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:12.063 09:54:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:12.063 09:54:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.322 [2024-10-21 09:54:48.712843] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
00:10:12.322 [2024-10-21 09:54:48.712977] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:12.322 [2024-10-21 09:54:48.874873] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:12.581 [2024-10-21 09:54:48.997224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.841 [2024-10-21 09:54:49.219879] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:12.841 [2024-10-21 09:54:49.219930] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:13.114 09:54:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:13.114 09:54:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:10:13.114 09:54:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:13.114 09:54:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.114 09:54:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.114 [2024-10-21 09:54:49.550952] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:13.114 [2024-10-21 09:54:49.551001] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:13.114 [2024-10-21 09:54:49.551012] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:13.114 [2024-10-21 09:54:49.551022] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:13.114 [2024-10-21 09:54:49.551028] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:10:13.114 [2024-10-21 09:54:49.551047] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:13.114 09:54:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.114 09:54:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:13.114 09:54:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:13.114 09:54:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:13.114 09:54:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:13.114 09:54:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:13.114 09:54:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:13.114 09:54:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.114 09:54:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.114 09:54:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.114 09:54:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.114 09:54:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.114 09:54:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:13.115 09:54:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.115 09:54:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.115 09:54:49 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.115 09:54:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.115 "name": "Existed_Raid", 00:10:13.115 "uuid": "8d99b4cb-6452-40df-8172-414dde719360", 00:10:13.115 "strip_size_kb": 0, 00:10:13.115 "state": "configuring", 00:10:13.115 "raid_level": "raid1", 00:10:13.115 "superblock": true, 00:10:13.115 "num_base_bdevs": 3, 00:10:13.115 "num_base_bdevs_discovered": 0, 00:10:13.115 "num_base_bdevs_operational": 3, 00:10:13.115 "base_bdevs_list": [ 00:10:13.115 { 00:10:13.115 "name": "BaseBdev1", 00:10:13.115 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.115 "is_configured": false, 00:10:13.115 "data_offset": 0, 00:10:13.115 "data_size": 0 00:10:13.115 }, 00:10:13.115 { 00:10:13.115 "name": "BaseBdev2", 00:10:13.115 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.115 "is_configured": false, 00:10:13.115 "data_offset": 0, 00:10:13.115 "data_size": 0 00:10:13.115 }, 00:10:13.115 { 00:10:13.115 "name": "BaseBdev3", 00:10:13.115 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.115 "is_configured": false, 00:10:13.115 "data_offset": 0, 00:10:13.115 "data_size": 0 00:10:13.115 } 00:10:13.115 ] 00:10:13.115 }' 00:10:13.115 09:54:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.115 09:54:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.379 09:54:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:13.379 09:54:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.379 09:54:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.379 [2024-10-21 09:54:49.970264] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:13.379 [2024-10-21 09:54:49.970311] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000005b80 name Existed_Raid, state configuring 00:10:13.640 09:54:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.640 09:54:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:13.640 09:54:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.640 09:54:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.640 [2024-10-21 09:54:49.982246] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:13.640 [2024-10-21 09:54:49.982290] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:13.640 [2024-10-21 09:54:49.982316] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:13.640 [2024-10-21 09:54:49.982327] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:13.640 [2024-10-21 09:54:49.982334] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:13.640 [2024-10-21 09:54:49.982344] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:13.640 09:54:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.640 09:54:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:13.640 09:54:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.640 09:54:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.640 [2024-10-21 09:54:50.032786] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:13.640 BaseBdev1 
00:10:13.640 09:54:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.640 09:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:13.640 09:54:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:13.640 09:54:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:13.640 09:54:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:13.640 09:54:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:13.640 09:54:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:13.640 09:54:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:13.640 09:54:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.640 09:54:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.640 09:54:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.640 09:54:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:13.640 09:54:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.640 09:54:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.640 [ 00:10:13.640 { 00:10:13.640 "name": "BaseBdev1", 00:10:13.640 "aliases": [ 00:10:13.640 "d747dd4c-eb87-43d8-b765-5ec4e90943e5" 00:10:13.640 ], 00:10:13.640 "product_name": "Malloc disk", 00:10:13.640 "block_size": 512, 00:10:13.640 "num_blocks": 65536, 00:10:13.640 "uuid": "d747dd4c-eb87-43d8-b765-5ec4e90943e5", 00:10:13.640 "assigned_rate_limits": { 00:10:13.640 
"rw_ios_per_sec": 0, 00:10:13.640 "rw_mbytes_per_sec": 0, 00:10:13.640 "r_mbytes_per_sec": 0, 00:10:13.640 "w_mbytes_per_sec": 0 00:10:13.640 }, 00:10:13.640 "claimed": true, 00:10:13.640 "claim_type": "exclusive_write", 00:10:13.640 "zoned": false, 00:10:13.640 "supported_io_types": { 00:10:13.640 "read": true, 00:10:13.640 "write": true, 00:10:13.640 "unmap": true, 00:10:13.640 "flush": true, 00:10:13.640 "reset": true, 00:10:13.640 "nvme_admin": false, 00:10:13.640 "nvme_io": false, 00:10:13.640 "nvme_io_md": false, 00:10:13.640 "write_zeroes": true, 00:10:13.640 "zcopy": true, 00:10:13.640 "get_zone_info": false, 00:10:13.640 "zone_management": false, 00:10:13.640 "zone_append": false, 00:10:13.640 "compare": false, 00:10:13.640 "compare_and_write": false, 00:10:13.640 "abort": true, 00:10:13.640 "seek_hole": false, 00:10:13.640 "seek_data": false, 00:10:13.640 "copy": true, 00:10:13.640 "nvme_iov_md": false 00:10:13.640 }, 00:10:13.640 "memory_domains": [ 00:10:13.640 { 00:10:13.640 "dma_device_id": "system", 00:10:13.640 "dma_device_type": 1 00:10:13.640 }, 00:10:13.640 { 00:10:13.640 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.640 "dma_device_type": 2 00:10:13.640 } 00:10:13.640 ], 00:10:13.640 "driver_specific": {} 00:10:13.640 } 00:10:13.640 ] 00:10:13.640 09:54:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.640 09:54:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:13.640 09:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:13.640 09:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:13.640 09:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:13.640 09:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:10:13.640 09:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:13.640 09:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:13.640 09:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.640 09:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.640 09:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.640 09:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.640 09:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.640 09:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:13.640 09:54:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.640 09:54:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.640 09:54:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.640 09:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.640 "name": "Existed_Raid", 00:10:13.640 "uuid": "e1ad07bf-c3a9-4c4f-bab7-d3e5a310c9ce", 00:10:13.640 "strip_size_kb": 0, 00:10:13.640 "state": "configuring", 00:10:13.640 "raid_level": "raid1", 00:10:13.640 "superblock": true, 00:10:13.640 "num_base_bdevs": 3, 00:10:13.640 "num_base_bdevs_discovered": 1, 00:10:13.640 "num_base_bdevs_operational": 3, 00:10:13.640 "base_bdevs_list": [ 00:10:13.640 { 00:10:13.640 "name": "BaseBdev1", 00:10:13.640 "uuid": "d747dd4c-eb87-43d8-b765-5ec4e90943e5", 00:10:13.640 "is_configured": true, 00:10:13.640 "data_offset": 2048, 00:10:13.640 "data_size": 63488 
00:10:13.640 }, 00:10:13.640 { 00:10:13.640 "name": "BaseBdev2", 00:10:13.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.640 "is_configured": false, 00:10:13.640 "data_offset": 0, 00:10:13.640 "data_size": 0 00:10:13.640 }, 00:10:13.640 { 00:10:13.640 "name": "BaseBdev3", 00:10:13.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.640 "is_configured": false, 00:10:13.640 "data_offset": 0, 00:10:13.640 "data_size": 0 00:10:13.640 } 00:10:13.640 ] 00:10:13.640 }' 00:10:13.640 09:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.640 09:54:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.900 09:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:13.900 09:54:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.900 09:54:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.900 [2024-10-21 09:54:50.484054] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:13.900 [2024-10-21 09:54:50.484175] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000005f00 name Existed_Raid, state configuring 00:10:13.900 09:54:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.900 09:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:13.900 09:54:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.900 09:54:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.900 [2024-10-21 09:54:50.492098] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:13.900 [2024-10-21 09:54:50.493925] 
bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:13.900 [2024-10-21 09:54:50.493963] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:13.900 [2024-10-21 09:54:50.493973] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:13.900 [2024-10-21 09:54:50.493982] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:14.160 09:54:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.160 09:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:14.160 09:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:14.160 09:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:14.160 09:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:14.160 09:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:14.160 09:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:14.160 09:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:14.160 09:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:14.160 09:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.160 09:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.160 09:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.160 09:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:10:14.160 09:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.160 09:54:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.160 09:54:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.160 09:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:14.160 09:54:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.160 09:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.160 "name": "Existed_Raid", 00:10:14.160 "uuid": "8a0cdcd5-e716-46bd-a95c-a7b71a1dc8e1", 00:10:14.160 "strip_size_kb": 0, 00:10:14.160 "state": "configuring", 00:10:14.160 "raid_level": "raid1", 00:10:14.160 "superblock": true, 00:10:14.160 "num_base_bdevs": 3, 00:10:14.160 "num_base_bdevs_discovered": 1, 00:10:14.160 "num_base_bdevs_operational": 3, 00:10:14.160 "base_bdevs_list": [ 00:10:14.160 { 00:10:14.160 "name": "BaseBdev1", 00:10:14.160 "uuid": "d747dd4c-eb87-43d8-b765-5ec4e90943e5", 00:10:14.160 "is_configured": true, 00:10:14.160 "data_offset": 2048, 00:10:14.160 "data_size": 63488 00:10:14.160 }, 00:10:14.160 { 00:10:14.160 "name": "BaseBdev2", 00:10:14.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:14.160 "is_configured": false, 00:10:14.160 "data_offset": 0, 00:10:14.160 "data_size": 0 00:10:14.160 }, 00:10:14.160 { 00:10:14.160 "name": "BaseBdev3", 00:10:14.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:14.160 "is_configured": false, 00:10:14.161 "data_offset": 0, 00:10:14.161 "data_size": 0 00:10:14.161 } 00:10:14.161 ] 00:10:14.161 }' 00:10:14.161 09:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.161 09:54:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:10:14.420 09:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:14.420 09:54:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.420 09:54:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.420 [2024-10-21 09:54:50.961306] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:14.420 BaseBdev2 00:10:14.420 09:54:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.420 09:54:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:14.420 09:54:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:14.420 09:54:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:14.420 09:54:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:14.420 09:54:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:14.420 09:54:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:14.420 09:54:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:14.420 09:54:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.420 09:54:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.420 09:54:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.420 09:54:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:14.420 09:54:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:14.420 09:54:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.420 [ 00:10:14.420 { 00:10:14.420 "name": "BaseBdev2", 00:10:14.420 "aliases": [ 00:10:14.420 "17f8922f-ac02-4eb3-9ec0-1d958f06d6fa" 00:10:14.420 ], 00:10:14.420 "product_name": "Malloc disk", 00:10:14.420 "block_size": 512, 00:10:14.420 "num_blocks": 65536, 00:10:14.420 "uuid": "17f8922f-ac02-4eb3-9ec0-1d958f06d6fa", 00:10:14.420 "assigned_rate_limits": { 00:10:14.420 "rw_ios_per_sec": 0, 00:10:14.420 "rw_mbytes_per_sec": 0, 00:10:14.420 "r_mbytes_per_sec": 0, 00:10:14.420 "w_mbytes_per_sec": 0 00:10:14.420 }, 00:10:14.420 "claimed": true, 00:10:14.421 "claim_type": "exclusive_write", 00:10:14.421 "zoned": false, 00:10:14.421 "supported_io_types": { 00:10:14.421 "read": true, 00:10:14.421 "write": true, 00:10:14.421 "unmap": true, 00:10:14.421 "flush": true, 00:10:14.421 "reset": true, 00:10:14.421 "nvme_admin": false, 00:10:14.421 "nvme_io": false, 00:10:14.421 "nvme_io_md": false, 00:10:14.421 "write_zeroes": true, 00:10:14.421 "zcopy": true, 00:10:14.421 "get_zone_info": false, 00:10:14.421 "zone_management": false, 00:10:14.421 "zone_append": false, 00:10:14.421 "compare": false, 00:10:14.421 "compare_and_write": false, 00:10:14.421 "abort": true, 00:10:14.421 "seek_hole": false, 00:10:14.421 "seek_data": false, 00:10:14.421 "copy": true, 00:10:14.421 "nvme_iov_md": false 00:10:14.421 }, 00:10:14.421 "memory_domains": [ 00:10:14.421 { 00:10:14.421 "dma_device_id": "system", 00:10:14.421 "dma_device_type": 1 00:10:14.421 }, 00:10:14.421 { 00:10:14.421 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.421 "dma_device_type": 2 00:10:14.421 } 00:10:14.421 ], 00:10:14.421 "driver_specific": {} 00:10:14.421 } 00:10:14.421 ] 00:10:14.421 09:54:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.421 09:54:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 
00:10:14.421 09:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:14.421 09:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:14.421 09:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:14.421 09:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:14.421 09:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:14.421 09:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:14.421 09:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:14.421 09:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:14.421 09:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.421 09:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.421 09:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.421 09:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.421 09:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.421 09:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:14.421 09:54:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.421 09:54:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.681 09:54:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.681 
09:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.681 "name": "Existed_Raid", 00:10:14.681 "uuid": "8a0cdcd5-e716-46bd-a95c-a7b71a1dc8e1", 00:10:14.681 "strip_size_kb": 0, 00:10:14.681 "state": "configuring", 00:10:14.681 "raid_level": "raid1", 00:10:14.681 "superblock": true, 00:10:14.681 "num_base_bdevs": 3, 00:10:14.681 "num_base_bdevs_discovered": 2, 00:10:14.681 "num_base_bdevs_operational": 3, 00:10:14.681 "base_bdevs_list": [ 00:10:14.681 { 00:10:14.681 "name": "BaseBdev1", 00:10:14.681 "uuid": "d747dd4c-eb87-43d8-b765-5ec4e90943e5", 00:10:14.681 "is_configured": true, 00:10:14.681 "data_offset": 2048, 00:10:14.681 "data_size": 63488 00:10:14.681 }, 00:10:14.681 { 00:10:14.681 "name": "BaseBdev2", 00:10:14.681 "uuid": "17f8922f-ac02-4eb3-9ec0-1d958f06d6fa", 00:10:14.681 "is_configured": true, 00:10:14.681 "data_offset": 2048, 00:10:14.681 "data_size": 63488 00:10:14.681 }, 00:10:14.681 { 00:10:14.681 "name": "BaseBdev3", 00:10:14.681 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:14.681 "is_configured": false, 00:10:14.681 "data_offset": 0, 00:10:14.681 "data_size": 0 00:10:14.681 } 00:10:14.681 ] 00:10:14.681 }' 00:10:14.681 09:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.681 09:54:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.940 09:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:14.940 09:54:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.940 09:54:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.940 [2024-10-21 09:54:51.526349] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:14.940 [2024-10-21 09:54:51.526799] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000006280 00:10:14.940 [2024-10-21 09:54:51.526869] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:14.940 [2024-10-21 09:54:51.527178] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:10:14.940 [2024-10-21 09:54:51.527418] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:10:14.940 [2024-10-21 09:54:51.527468] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006280 00:10:14.940 BaseBdev3 00:10:14.940 [2024-10-21 09:54:51.527686] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:14.940 09:54:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.940 09:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:14.940 09:54:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:14.940 09:54:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:14.940 09:54:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:14.940 09:54:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:14.940 09:54:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:14.940 09:54:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:14.940 09:54:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.940 09:54:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.200 09:54:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.200 09:54:51 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:15.200 09:54:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.200 09:54:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.200 [ 00:10:15.200 { 00:10:15.200 "name": "BaseBdev3", 00:10:15.200 "aliases": [ 00:10:15.200 "c758a8f9-1aa2-407b-bdae-cf948d869312" 00:10:15.200 ], 00:10:15.200 "product_name": "Malloc disk", 00:10:15.200 "block_size": 512, 00:10:15.200 "num_blocks": 65536, 00:10:15.200 "uuid": "c758a8f9-1aa2-407b-bdae-cf948d869312", 00:10:15.200 "assigned_rate_limits": { 00:10:15.200 "rw_ios_per_sec": 0, 00:10:15.200 "rw_mbytes_per_sec": 0, 00:10:15.200 "r_mbytes_per_sec": 0, 00:10:15.200 "w_mbytes_per_sec": 0 00:10:15.200 }, 00:10:15.200 "claimed": true, 00:10:15.200 "claim_type": "exclusive_write", 00:10:15.200 "zoned": false, 00:10:15.200 "supported_io_types": { 00:10:15.200 "read": true, 00:10:15.200 "write": true, 00:10:15.200 "unmap": true, 00:10:15.200 "flush": true, 00:10:15.200 "reset": true, 00:10:15.200 "nvme_admin": false, 00:10:15.200 "nvme_io": false, 00:10:15.200 "nvme_io_md": false, 00:10:15.200 "write_zeroes": true, 00:10:15.200 "zcopy": true, 00:10:15.200 "get_zone_info": false, 00:10:15.200 "zone_management": false, 00:10:15.200 "zone_append": false, 00:10:15.200 "compare": false, 00:10:15.200 "compare_and_write": false, 00:10:15.200 "abort": true, 00:10:15.200 "seek_hole": false, 00:10:15.200 "seek_data": false, 00:10:15.200 "copy": true, 00:10:15.200 "nvme_iov_md": false 00:10:15.200 }, 00:10:15.200 "memory_domains": [ 00:10:15.200 { 00:10:15.200 "dma_device_id": "system", 00:10:15.200 "dma_device_type": 1 00:10:15.200 }, 00:10:15.200 { 00:10:15.200 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.200 "dma_device_type": 2 00:10:15.200 } 00:10:15.200 ], 00:10:15.200 "driver_specific": {} 00:10:15.200 } 00:10:15.200 ] 
00:10:15.200 09:54:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.200 09:54:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:15.200 09:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:15.200 09:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:15.200 09:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:15.200 09:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:15.200 09:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:15.200 09:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:15.200 09:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:15.200 09:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:15.200 09:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.200 09:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.200 09:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.200 09:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.201 09:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.201 09:54:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.201 09:54:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.201 09:54:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:15.201 09:54:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.201 09:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.201 "name": "Existed_Raid", 00:10:15.201 "uuid": "8a0cdcd5-e716-46bd-a95c-a7b71a1dc8e1", 00:10:15.201 "strip_size_kb": 0, 00:10:15.201 "state": "online", 00:10:15.201 "raid_level": "raid1", 00:10:15.201 "superblock": true, 00:10:15.201 "num_base_bdevs": 3, 00:10:15.201 "num_base_bdevs_discovered": 3, 00:10:15.201 "num_base_bdevs_operational": 3, 00:10:15.201 "base_bdevs_list": [ 00:10:15.201 { 00:10:15.201 "name": "BaseBdev1", 00:10:15.201 "uuid": "d747dd4c-eb87-43d8-b765-5ec4e90943e5", 00:10:15.201 "is_configured": true, 00:10:15.201 "data_offset": 2048, 00:10:15.201 "data_size": 63488 00:10:15.201 }, 00:10:15.201 { 00:10:15.201 "name": "BaseBdev2", 00:10:15.201 "uuid": "17f8922f-ac02-4eb3-9ec0-1d958f06d6fa", 00:10:15.201 "is_configured": true, 00:10:15.201 "data_offset": 2048, 00:10:15.201 "data_size": 63488 00:10:15.201 }, 00:10:15.201 { 00:10:15.201 "name": "BaseBdev3", 00:10:15.201 "uuid": "c758a8f9-1aa2-407b-bdae-cf948d869312", 00:10:15.201 "is_configured": true, 00:10:15.201 "data_offset": 2048, 00:10:15.201 "data_size": 63488 00:10:15.201 } 00:10:15.201 ] 00:10:15.201 }' 00:10:15.201 09:54:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.201 09:54:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.460 09:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:15.460 09:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:15.460 09:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_info 00:10:15.460 09:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:15.460 09:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:15.460 09:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:15.460 09:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:15.460 09:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:15.460 09:54:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.460 09:54:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.719 [2024-10-21 09:54:52.057901] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:15.719 09:54:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.719 09:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:15.719 "name": "Existed_Raid", 00:10:15.719 "aliases": [ 00:10:15.719 "8a0cdcd5-e716-46bd-a95c-a7b71a1dc8e1" 00:10:15.719 ], 00:10:15.719 "product_name": "Raid Volume", 00:10:15.719 "block_size": 512, 00:10:15.719 "num_blocks": 63488, 00:10:15.719 "uuid": "8a0cdcd5-e716-46bd-a95c-a7b71a1dc8e1", 00:10:15.719 "assigned_rate_limits": { 00:10:15.719 "rw_ios_per_sec": 0, 00:10:15.719 "rw_mbytes_per_sec": 0, 00:10:15.719 "r_mbytes_per_sec": 0, 00:10:15.719 "w_mbytes_per_sec": 0 00:10:15.719 }, 00:10:15.719 "claimed": false, 00:10:15.719 "zoned": false, 00:10:15.720 "supported_io_types": { 00:10:15.720 "read": true, 00:10:15.720 "write": true, 00:10:15.720 "unmap": false, 00:10:15.720 "flush": false, 00:10:15.720 "reset": true, 00:10:15.720 "nvme_admin": false, 00:10:15.720 "nvme_io": false, 00:10:15.720 "nvme_io_md": false, 00:10:15.720 
"write_zeroes": true, 00:10:15.720 "zcopy": false, 00:10:15.720 "get_zone_info": false, 00:10:15.720 "zone_management": false, 00:10:15.720 "zone_append": false, 00:10:15.720 "compare": false, 00:10:15.720 "compare_and_write": false, 00:10:15.720 "abort": false, 00:10:15.720 "seek_hole": false, 00:10:15.720 "seek_data": false, 00:10:15.720 "copy": false, 00:10:15.720 "nvme_iov_md": false 00:10:15.720 }, 00:10:15.720 "memory_domains": [ 00:10:15.720 { 00:10:15.720 "dma_device_id": "system", 00:10:15.720 "dma_device_type": 1 00:10:15.720 }, 00:10:15.720 { 00:10:15.720 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.720 "dma_device_type": 2 00:10:15.720 }, 00:10:15.720 { 00:10:15.720 "dma_device_id": "system", 00:10:15.720 "dma_device_type": 1 00:10:15.720 }, 00:10:15.720 { 00:10:15.720 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.720 "dma_device_type": 2 00:10:15.720 }, 00:10:15.720 { 00:10:15.720 "dma_device_id": "system", 00:10:15.720 "dma_device_type": 1 00:10:15.720 }, 00:10:15.720 { 00:10:15.720 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.720 "dma_device_type": 2 00:10:15.720 } 00:10:15.720 ], 00:10:15.720 "driver_specific": { 00:10:15.720 "raid": { 00:10:15.720 "uuid": "8a0cdcd5-e716-46bd-a95c-a7b71a1dc8e1", 00:10:15.720 "strip_size_kb": 0, 00:10:15.720 "state": "online", 00:10:15.720 "raid_level": "raid1", 00:10:15.720 "superblock": true, 00:10:15.720 "num_base_bdevs": 3, 00:10:15.720 "num_base_bdevs_discovered": 3, 00:10:15.720 "num_base_bdevs_operational": 3, 00:10:15.720 "base_bdevs_list": [ 00:10:15.720 { 00:10:15.720 "name": "BaseBdev1", 00:10:15.720 "uuid": "d747dd4c-eb87-43d8-b765-5ec4e90943e5", 00:10:15.720 "is_configured": true, 00:10:15.720 "data_offset": 2048, 00:10:15.720 "data_size": 63488 00:10:15.720 }, 00:10:15.720 { 00:10:15.720 "name": "BaseBdev2", 00:10:15.720 "uuid": "17f8922f-ac02-4eb3-9ec0-1d958f06d6fa", 00:10:15.720 "is_configured": true, 00:10:15.720 "data_offset": 2048, 00:10:15.720 "data_size": 63488 00:10:15.720 }, 
00:10:15.720 { 00:10:15.720 "name": "BaseBdev3", 00:10:15.720 "uuid": "c758a8f9-1aa2-407b-bdae-cf948d869312", 00:10:15.720 "is_configured": true, 00:10:15.720 "data_offset": 2048, 00:10:15.720 "data_size": 63488 00:10:15.720 } 00:10:15.720 ] 00:10:15.720 } 00:10:15.720 } 00:10:15.720 }' 00:10:15.720 09:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:15.720 09:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:15.720 BaseBdev2 00:10:15.720 BaseBdev3' 00:10:15.720 09:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:15.720 09:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:15.720 09:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:15.720 09:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:15.720 09:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:15.720 09:54:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.720 09:54:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.720 09:54:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.720 09:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:15.720 09:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:15.720 09:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:15.720 
09:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:15.720 09:54:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.720 09:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:15.720 09:54:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.720 09:54:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.720 09:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:15.720 09:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:15.720 09:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:15.720 09:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:15.720 09:54:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.720 09:54:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.720 09:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:15.980 09:54:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.980 09:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:15.980 09:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:15.980 09:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:15.980 09:54:52 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.980 09:54:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.980 [2024-10-21 09:54:52.357042] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:15.980 09:54:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.980 09:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:15.980 09:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:15.980 09:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:15.980 09:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:10:15.980 09:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:15.980 09:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:10:15.980 09:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:15.980 09:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:15.980 09:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:15.980 09:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:15.980 09:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:15.980 09:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.980 09:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.980 09:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.980 
09:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.980 09:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.980 09:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:15.980 09:54:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.980 09:54:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.980 09:54:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.980 09:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.980 "name": "Existed_Raid", 00:10:15.980 "uuid": "8a0cdcd5-e716-46bd-a95c-a7b71a1dc8e1", 00:10:15.980 "strip_size_kb": 0, 00:10:15.980 "state": "online", 00:10:15.980 "raid_level": "raid1", 00:10:15.980 "superblock": true, 00:10:15.980 "num_base_bdevs": 3, 00:10:15.980 "num_base_bdevs_discovered": 2, 00:10:15.980 "num_base_bdevs_operational": 2, 00:10:15.980 "base_bdevs_list": [ 00:10:15.980 { 00:10:15.980 "name": null, 00:10:15.980 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.980 "is_configured": false, 00:10:15.980 "data_offset": 0, 00:10:15.980 "data_size": 63488 00:10:15.980 }, 00:10:15.980 { 00:10:15.980 "name": "BaseBdev2", 00:10:15.980 "uuid": "17f8922f-ac02-4eb3-9ec0-1d958f06d6fa", 00:10:15.980 "is_configured": true, 00:10:15.980 "data_offset": 2048, 00:10:15.980 "data_size": 63488 00:10:15.980 }, 00:10:15.980 { 00:10:15.980 "name": "BaseBdev3", 00:10:15.980 "uuid": "c758a8f9-1aa2-407b-bdae-cf948d869312", 00:10:15.980 "is_configured": true, 00:10:15.980 "data_offset": 2048, 00:10:15.980 "data_size": 63488 00:10:15.980 } 00:10:15.980 ] 00:10:15.980 }' 00:10:15.980 09:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.980 
09:54:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.549 09:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:16.549 09:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:16.549 09:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.549 09:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:16.549 09:54:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.549 09:54:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.549 09:54:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.549 09:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:16.549 09:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:16.549 09:54:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:16.549 09:54:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.549 09:54:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.549 [2024-10-21 09:54:52.945436] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:16.549 09:54:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.549 09:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:16.549 09:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:16.549 09:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:16.549 09:54:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.549 09:54:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.549 09:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:16.549 09:54:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.549 09:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:16.549 09:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:16.549 09:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:16.549 09:54:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.549 09:54:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.549 [2024-10-21 09:54:53.101470] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:16.549 [2024-10-21 09:54:53.101655] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:16.809 [2024-10-21 09:54:53.196822] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:16.809 [2024-10-21 09:54:53.196893] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:16.809 [2024-10-21 09:54:53.196911] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state offline 00:10:16.809 09:54:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.809 09:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:16.809 09:54:53 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:16.809 09:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.809 09:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:16.809 09:54:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.809 09:54:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.809 09:54:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.809 09:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:16.809 09:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:16.809 09:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:16.809 09:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:16.809 09:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:16.809 09:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:16.809 09:54:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.809 09:54:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.809 BaseBdev2 00:10:16.809 09:54:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.809 09:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:16.809 09:54:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:16.809 09:54:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 
00:10:16.809 09:54:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:16.809 09:54:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:16.809 09:54:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:16.809 09:54:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:16.809 09:54:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.809 09:54:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.809 09:54:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.809 09:54:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:16.809 09:54:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.810 09:54:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.810 [ 00:10:16.810 { 00:10:16.810 "name": "BaseBdev2", 00:10:16.810 "aliases": [ 00:10:16.810 "b655bd49-edd8-45d7-85b6-b59da483530a" 00:10:16.810 ], 00:10:16.810 "product_name": "Malloc disk", 00:10:16.810 "block_size": 512, 00:10:16.810 "num_blocks": 65536, 00:10:16.810 "uuid": "b655bd49-edd8-45d7-85b6-b59da483530a", 00:10:16.810 "assigned_rate_limits": { 00:10:16.810 "rw_ios_per_sec": 0, 00:10:16.810 "rw_mbytes_per_sec": 0, 00:10:16.810 "r_mbytes_per_sec": 0, 00:10:16.810 "w_mbytes_per_sec": 0 00:10:16.810 }, 00:10:16.810 "claimed": false, 00:10:16.810 "zoned": false, 00:10:16.810 "supported_io_types": { 00:10:16.810 "read": true, 00:10:16.810 "write": true, 00:10:16.810 "unmap": true, 00:10:16.810 "flush": true, 00:10:16.810 "reset": true, 00:10:16.810 "nvme_admin": false, 00:10:16.810 "nvme_io": false, 00:10:16.810 
"nvme_io_md": false, 00:10:16.810 "write_zeroes": true, 00:10:16.810 "zcopy": true, 00:10:16.810 "get_zone_info": false, 00:10:16.810 "zone_management": false, 00:10:16.810 "zone_append": false, 00:10:16.810 "compare": false, 00:10:16.810 "compare_and_write": false, 00:10:16.810 "abort": true, 00:10:16.810 "seek_hole": false, 00:10:16.810 "seek_data": false, 00:10:16.810 "copy": true, 00:10:16.810 "nvme_iov_md": false 00:10:16.810 }, 00:10:16.810 "memory_domains": [ 00:10:16.810 { 00:10:16.810 "dma_device_id": "system", 00:10:16.810 "dma_device_type": 1 00:10:16.810 }, 00:10:16.810 { 00:10:16.810 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.810 "dma_device_type": 2 00:10:16.810 } 00:10:16.810 ], 00:10:16.810 "driver_specific": {} 00:10:16.810 } 00:10:16.810 ] 00:10:16.810 09:54:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.810 09:54:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:16.810 09:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:16.810 09:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:16.810 09:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:16.810 09:54:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.810 09:54:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.810 BaseBdev3 00:10:16.810 09:54:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.810 09:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:16.810 09:54:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:16.810 09:54:53 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:16.810 09:54:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:16.810 09:54:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:16.810 09:54:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:16.810 09:54:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:16.810 09:54:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.810 09:54:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.810 09:54:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.810 09:54:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:16.810 09:54:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.810 09:54:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.069 [ 00:10:17.069 { 00:10:17.069 "name": "BaseBdev3", 00:10:17.069 "aliases": [ 00:10:17.069 "eb4a48e4-78ab-49a4-b58d-62069d99f1df" 00:10:17.069 ], 00:10:17.070 "product_name": "Malloc disk", 00:10:17.070 "block_size": 512, 00:10:17.070 "num_blocks": 65536, 00:10:17.070 "uuid": "eb4a48e4-78ab-49a4-b58d-62069d99f1df", 00:10:17.070 "assigned_rate_limits": { 00:10:17.070 "rw_ios_per_sec": 0, 00:10:17.070 "rw_mbytes_per_sec": 0, 00:10:17.070 "r_mbytes_per_sec": 0, 00:10:17.070 "w_mbytes_per_sec": 0 00:10:17.070 }, 00:10:17.070 "claimed": false, 00:10:17.070 "zoned": false, 00:10:17.070 "supported_io_types": { 00:10:17.070 "read": true, 00:10:17.070 "write": true, 00:10:17.070 "unmap": true, 00:10:17.070 "flush": true, 00:10:17.070 "reset": true, 00:10:17.070 "nvme_admin": false, 
00:10:17.070 "nvme_io": false, 00:10:17.070 "nvme_io_md": false, 00:10:17.070 "write_zeroes": true, 00:10:17.070 "zcopy": true, 00:10:17.070 "get_zone_info": false, 00:10:17.070 "zone_management": false, 00:10:17.070 "zone_append": false, 00:10:17.070 "compare": false, 00:10:17.070 "compare_and_write": false, 00:10:17.070 "abort": true, 00:10:17.070 "seek_hole": false, 00:10:17.070 "seek_data": false, 00:10:17.070 "copy": true, 00:10:17.070 "nvme_iov_md": false 00:10:17.070 }, 00:10:17.070 "memory_domains": [ 00:10:17.070 { 00:10:17.070 "dma_device_id": "system", 00:10:17.070 "dma_device_type": 1 00:10:17.070 }, 00:10:17.070 { 00:10:17.070 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.070 "dma_device_type": 2 00:10:17.070 } 00:10:17.070 ], 00:10:17.070 "driver_specific": {} 00:10:17.070 } 00:10:17.070 ] 00:10:17.070 09:54:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.070 09:54:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:17.070 09:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:17.070 09:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:17.070 09:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:17.070 09:54:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.070 09:54:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.070 [2024-10-21 09:54:53.423180] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:17.070 [2024-10-21 09:54:53.423287] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:17.070 [2024-10-21 09:54:53.423315] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:17.070 [2024-10-21 09:54:53.425324] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:17.070 09:54:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.070 09:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:17.070 09:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:17.070 09:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:17.070 09:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:17.070 09:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:17.070 09:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:17.070 09:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.070 09:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.070 09:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.070 09:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.070 09:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.070 09:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:17.070 09:54:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.070 09:54:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.070 
09:54:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.070 09:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.070 "name": "Existed_Raid", 00:10:17.070 "uuid": "5d6da6c3-1662-4117-9fd2-0e9243c0e060", 00:10:17.070 "strip_size_kb": 0, 00:10:17.070 "state": "configuring", 00:10:17.070 "raid_level": "raid1", 00:10:17.070 "superblock": true, 00:10:17.070 "num_base_bdevs": 3, 00:10:17.070 "num_base_bdevs_discovered": 2, 00:10:17.070 "num_base_bdevs_operational": 3, 00:10:17.070 "base_bdevs_list": [ 00:10:17.070 { 00:10:17.070 "name": "BaseBdev1", 00:10:17.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.070 "is_configured": false, 00:10:17.070 "data_offset": 0, 00:10:17.070 "data_size": 0 00:10:17.070 }, 00:10:17.070 { 00:10:17.070 "name": "BaseBdev2", 00:10:17.070 "uuid": "b655bd49-edd8-45d7-85b6-b59da483530a", 00:10:17.070 "is_configured": true, 00:10:17.070 "data_offset": 2048, 00:10:17.070 "data_size": 63488 00:10:17.070 }, 00:10:17.070 { 00:10:17.070 "name": "BaseBdev3", 00:10:17.070 "uuid": "eb4a48e4-78ab-49a4-b58d-62069d99f1df", 00:10:17.070 "is_configured": true, 00:10:17.070 "data_offset": 2048, 00:10:17.070 "data_size": 63488 00:10:17.070 } 00:10:17.070 ] 00:10:17.070 }' 00:10:17.070 09:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.070 09:54:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.330 09:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:17.330 09:54:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.330 09:54:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.330 [2024-10-21 09:54:53.874496] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:17.330 09:54:53 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.330 09:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:17.330 09:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:17.330 09:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:17.330 09:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:17.330 09:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:17.330 09:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:17.330 09:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.330 09:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.330 09:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.330 09:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.330 09:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.330 09:54:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.330 09:54:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.330 09:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:17.330 09:54:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.589 09:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.589 "name": 
"Existed_Raid", 00:10:17.589 "uuid": "5d6da6c3-1662-4117-9fd2-0e9243c0e060", 00:10:17.589 "strip_size_kb": 0, 00:10:17.589 "state": "configuring", 00:10:17.589 "raid_level": "raid1", 00:10:17.589 "superblock": true, 00:10:17.589 "num_base_bdevs": 3, 00:10:17.589 "num_base_bdevs_discovered": 1, 00:10:17.589 "num_base_bdevs_operational": 3, 00:10:17.589 "base_bdevs_list": [ 00:10:17.589 { 00:10:17.589 "name": "BaseBdev1", 00:10:17.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.589 "is_configured": false, 00:10:17.589 "data_offset": 0, 00:10:17.589 "data_size": 0 00:10:17.589 }, 00:10:17.589 { 00:10:17.589 "name": null, 00:10:17.589 "uuid": "b655bd49-edd8-45d7-85b6-b59da483530a", 00:10:17.589 "is_configured": false, 00:10:17.589 "data_offset": 0, 00:10:17.589 "data_size": 63488 00:10:17.589 }, 00:10:17.589 { 00:10:17.589 "name": "BaseBdev3", 00:10:17.589 "uuid": "eb4a48e4-78ab-49a4-b58d-62069d99f1df", 00:10:17.589 "is_configured": true, 00:10:17.589 "data_offset": 2048, 00:10:17.589 "data_size": 63488 00:10:17.589 } 00:10:17.589 ] 00:10:17.589 }' 00:10:17.589 09:54:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.589 09:54:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.849 09:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.849 09:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:17.849 09:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.849 09:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.849 09:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.849 09:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:17.849 
09:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:10:17.849 09:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:17.849 09:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:17.849 [2024-10-21 09:54:54.396891] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:10:17.849 BaseBdev1
00:10:17.849 09:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:17.849 09:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:10:17.849 09:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:10:17.849 09:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:10:17.849 09:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:10:17.849 09:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:10:17.849 09:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:10:17.849 09:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:10:17.849 09:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:17.849 09:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:17.849 09:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:17.849 09:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:10:17.849 09:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:17.849 09:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:17.849 [
00:10:17.849 {
00:10:17.849 "name": "BaseBdev1",
00:10:17.849 "aliases": [
00:10:17.849 "96ef7c34-e37b-449a-8986-6967ccde3a82"
00:10:17.849 ],
00:10:17.849 "product_name": "Malloc disk",
00:10:17.849 "block_size": 512,
00:10:17.849 "num_blocks": 65536,
00:10:17.849 "uuid": "96ef7c34-e37b-449a-8986-6967ccde3a82",
00:10:17.849 "assigned_rate_limits": {
00:10:17.849 "rw_ios_per_sec": 0,
00:10:17.849 "rw_mbytes_per_sec": 0,
00:10:17.849 "r_mbytes_per_sec": 0,
00:10:17.849 "w_mbytes_per_sec": 0
00:10:17.849 },
00:10:17.849 "claimed": true,
00:10:17.849 "claim_type": "exclusive_write",
00:10:17.849 "zoned": false,
00:10:17.849 "supported_io_types": {
00:10:17.849 "read": true,
00:10:17.849 "write": true,
00:10:17.849 "unmap": true,
00:10:17.849 "flush": true,
00:10:17.849 "reset": true,
00:10:17.849 "nvme_admin": false,
00:10:17.849 "nvme_io": false,
00:10:17.849 "nvme_io_md": false,
00:10:17.849 "write_zeroes": true,
00:10:17.849 "zcopy": true,
00:10:17.849 "get_zone_info": false,
00:10:17.849 "zone_management": false,
00:10:17.849 "zone_append": false,
00:10:17.849 "compare": false,
00:10:17.849 "compare_and_write": false,
00:10:17.849 "abort": true,
00:10:17.849 "seek_hole": false,
00:10:17.849 "seek_data": false,
00:10:17.849 "copy": true,
00:10:17.849 "nvme_iov_md": false
00:10:17.849 },
00:10:17.849 "memory_domains": [
00:10:17.849 {
00:10:17.849 "dma_device_id": "system",
00:10:17.849 "dma_device_type": 1
00:10:17.849 },
00:10:17.849 {
00:10:17.849 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:17.849 "dma_device_type": 2
00:10:17.849 }
00:10:17.849 ],
00:10:17.849 "driver_specific": {}
00:10:17.849 }
00:10:17.849 ]
00:10:17.849 09:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:17.849 09:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:10:17.849 09:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:10:17.849 09:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:17.849 09:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:17.849 09:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:17.849 09:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:17.849 09:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:17.849 09:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:17.849 09:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:17.849 09:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:17.849 09:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:18.109 09:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:18.109 09:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:18.109 09:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:18.109 09:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:18.109 09:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:18.109 09:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:18.109 "name": "Existed_Raid",
00:10:18.109 "uuid": "5d6da6c3-1662-4117-9fd2-0e9243c0e060",
00:10:18.109 "strip_size_kb": 0,
00:10:18.109 "state": "configuring",
00:10:18.109 "raid_level": "raid1",
00:10:18.109 "superblock": true,
00:10:18.109 "num_base_bdevs": 3,
00:10:18.109 "num_base_bdevs_discovered": 2,
00:10:18.109 "num_base_bdevs_operational": 3,
00:10:18.109 "base_bdevs_list": [
00:10:18.109 {
00:10:18.109 "name": "BaseBdev1",
00:10:18.109 "uuid": "96ef7c34-e37b-449a-8986-6967ccde3a82",
00:10:18.109 "is_configured": true,
00:10:18.109 "data_offset": 2048,
00:10:18.109 "data_size": 63488
00:10:18.109 },
00:10:18.109 {
00:10:18.109 "name": null,
00:10:18.109 "uuid": "b655bd49-edd8-45d7-85b6-b59da483530a",
00:10:18.109 "is_configured": false,
00:10:18.109 "data_offset": 0,
00:10:18.109 "data_size": 63488
00:10:18.109 },
00:10:18.109 {
00:10:18.109 "name": "BaseBdev3",
00:10:18.109 "uuid": "eb4a48e4-78ab-49a4-b58d-62069d99f1df",
00:10:18.109 "is_configured": true,
00:10:18.109 "data_offset": 2048,
00:10:18.109 "data_size": 63488
00:10:18.109 }
00:10:18.109 ]
00:10:18.109 }'
00:10:18.109 09:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:18.109 09:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:18.368 09:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:10:18.368 09:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:18.368 09:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:18.368 09:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:18.369 09:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:18.369 09:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:10:18.369 09:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:10:18.369 09:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:18.369 09:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:18.369 [2024-10-21 09:54:54.908068] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:10:18.369 09:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:18.369 09:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:10:18.369 09:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:18.369 09:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:18.369 09:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:18.369 09:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:18.369 09:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:18.369 09:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:18.369 09:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:18.369 09:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:18.369 09:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:18.369 09:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:18.369 09:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:18.369 09:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:18.369 09:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:18.369 09:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:18.369 09:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:18.369 "name": "Existed_Raid",
00:10:18.369 "uuid": "5d6da6c3-1662-4117-9fd2-0e9243c0e060",
00:10:18.369 "strip_size_kb": 0,
00:10:18.369 "state": "configuring",
00:10:18.369 "raid_level": "raid1",
00:10:18.369 "superblock": true,
00:10:18.369 "num_base_bdevs": 3,
00:10:18.369 "num_base_bdevs_discovered": 1,
00:10:18.369 "num_base_bdevs_operational": 3,
00:10:18.369 "base_bdevs_list": [
00:10:18.369 {
00:10:18.369 "name": "BaseBdev1",
00:10:18.369 "uuid": "96ef7c34-e37b-449a-8986-6967ccde3a82",
00:10:18.369 "is_configured": true,
00:10:18.369 "data_offset": 2048,
00:10:18.369 "data_size": 63488
00:10:18.369 },
00:10:18.369 {
00:10:18.369 "name": null,
00:10:18.369 "uuid": "b655bd49-edd8-45d7-85b6-b59da483530a",
00:10:18.369 "is_configured": false,
00:10:18.369 "data_offset": 0,
00:10:18.369 "data_size": 63488
00:10:18.369 },
00:10:18.369 {
00:10:18.369 "name": null,
00:10:18.369 "uuid": "eb4a48e4-78ab-49a4-b58d-62069d99f1df",
00:10:18.369 "is_configured": false,
00:10:18.369 "data_offset": 0,
00:10:18.369 "data_size": 63488
00:10:18.369 }
00:10:18.369 ]
00:10:18.369 }'
00:10:18.369 09:54:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:18.369 09:54:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:18.937 09:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:18.937 09:54:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:18.937 09:54:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:18.937 09:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:10:18.937 09:54:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:18.937 09:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]]
00:10:18.937 09:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3
00:10:18.937 09:54:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:18.937 09:54:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:18.937 [2024-10-21 09:54:55.435214] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:10:18.937 09:54:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:18.937 09:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:10:18.937 09:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:18.937 09:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:18.937 09:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:18.937 09:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:18.937 09:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:18.937 09:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:18.937 09:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:18.937 09:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:18.937 09:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:18.937 09:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:18.937 09:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:18.937 09:54:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:18.937 09:54:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:18.937 09:54:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:18.937 09:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:18.937 "name": "Existed_Raid",
00:10:18.937 "uuid": "5d6da6c3-1662-4117-9fd2-0e9243c0e060",
00:10:18.937 "strip_size_kb": 0,
00:10:18.937 "state": "configuring",
00:10:18.937 "raid_level": "raid1",
00:10:18.937 "superblock": true,
00:10:18.937 "num_base_bdevs": 3,
00:10:18.937 "num_base_bdevs_discovered": 2,
00:10:18.937 "num_base_bdevs_operational": 3,
00:10:18.937 "base_bdevs_list": [
00:10:18.937 {
00:10:18.937 "name": "BaseBdev1",
00:10:18.937 "uuid": "96ef7c34-e37b-449a-8986-6967ccde3a82",
00:10:18.937 "is_configured": true,
00:10:18.937 "data_offset": 2048,
00:10:18.937 "data_size": 63488
00:10:18.937 },
00:10:18.937 {
00:10:18.937 "name": null,
00:10:18.937 "uuid": "b655bd49-edd8-45d7-85b6-b59da483530a",
00:10:18.937 "is_configured": false,
00:10:18.937 "data_offset": 0,
00:10:18.937 "data_size": 63488
00:10:18.937 },
00:10:18.937 {
00:10:18.937 "name": "BaseBdev3",
00:10:18.937 "uuid": "eb4a48e4-78ab-49a4-b58d-62069d99f1df",
00:10:18.937 "is_configured": true,
00:10:18.937 "data_offset": 2048,
00:10:18.937 "data_size": 63488
00:10:18.937 }
00:10:18.937 ]
00:10:18.937 }'
00:10:18.937 09:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:18.937 09:54:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:19.506 09:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:19.507 09:54:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:19.507 09:54:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:19.507 09:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:10:19.507 09:54:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:19.507 09:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]]
00:10:19.507 09:54:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:10:19.507 09:54:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:19.507 09:54:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:19.507 [2024-10-21 09:54:55.930371] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:10:19.507 09:54:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:19.507 09:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:10:19.507 09:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:19.507 09:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:19.507 09:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:19.507 09:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:19.507 09:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:19.507 09:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:19.507 09:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:19.507 09:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:19.507 09:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:19.507 09:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:19.507 09:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:19.507 09:54:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:19.507 09:54:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:19.507 09:54:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:19.507 09:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:19.507 "name": "Existed_Raid",
00:10:19.507 "uuid": "5d6da6c3-1662-4117-9fd2-0e9243c0e060",
00:10:19.507 "strip_size_kb": 0,
00:10:19.507 "state": "configuring",
00:10:19.507 "raid_level": "raid1",
00:10:19.507 "superblock": true,
00:10:19.507 "num_base_bdevs": 3,
00:10:19.507 "num_base_bdevs_discovered": 1,
00:10:19.507 "num_base_bdevs_operational": 3,
00:10:19.507 "base_bdevs_list": [
00:10:19.507 {
00:10:19.507 "name": null,
00:10:19.507 "uuid": "96ef7c34-e37b-449a-8986-6967ccde3a82",
00:10:19.507 "is_configured": false,
00:10:19.507 "data_offset": 0,
00:10:19.507 "data_size": 63488
00:10:19.507 },
00:10:19.507 {
00:10:19.507 "name": null,
00:10:19.507 "uuid": "b655bd49-edd8-45d7-85b6-b59da483530a",
00:10:19.507 "is_configured": false,
00:10:19.507 "data_offset": 0,
00:10:19.507 "data_size": 63488
00:10:19.507 },
00:10:19.507 {
00:10:19.507 "name": "BaseBdev3",
00:10:19.507 "uuid": "eb4a48e4-78ab-49a4-b58d-62069d99f1df",
00:10:19.507 "is_configured": true,
00:10:19.507 "data_offset": 2048,
00:10:19.507 "data_size": 63488
00:10:19.507 }
00:10:19.507 ]
00:10:19.507 }'
00:10:19.507 09:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:19.507 09:54:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:20.077 09:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:20.077 09:54:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:20.077 09:54:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:20.077 09:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:10:20.077 09:54:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:20.077 09:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]]
00:10:20.077 09:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2
00:10:20.077 09:54:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:20.077 09:54:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:20.077 [2024-10-21 09:54:56.545111] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:10:20.077 09:54:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:20.077 09:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:10:20.077 09:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:20.077 09:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:20.077 09:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:20.077 09:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:20.077 09:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:20.077 09:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:20.077 09:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:20.077 09:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:20.077 09:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:20.077 09:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:20.077 09:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:20.077 09:54:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:20.077 09:54:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:20.077 09:54:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:20.077 09:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:20.077 "name": "Existed_Raid",
00:10:20.077 "uuid": "5d6da6c3-1662-4117-9fd2-0e9243c0e060",
00:10:20.077 "strip_size_kb": 0,
00:10:20.077 "state": "configuring",
00:10:20.077 "raid_level": "raid1",
00:10:20.077 "superblock": true,
00:10:20.077 "num_base_bdevs": 3,
00:10:20.077 "num_base_bdevs_discovered": 2,
00:10:20.077 "num_base_bdevs_operational": 3,
00:10:20.077 "base_bdevs_list": [
00:10:20.077 {
00:10:20.077 "name": null,
00:10:20.077 "uuid": "96ef7c34-e37b-449a-8986-6967ccde3a82",
00:10:20.077 "is_configured": false,
00:10:20.077 "data_offset": 0,
00:10:20.077 "data_size": 63488
00:10:20.077 },
00:10:20.077 {
00:10:20.077 "name": "BaseBdev2",
00:10:20.077 "uuid": "b655bd49-edd8-45d7-85b6-b59da483530a",
00:10:20.077 "is_configured": true,
00:10:20.077 "data_offset": 2048,
00:10:20.077 "data_size": 63488
00:10:20.077 },
00:10:20.077 {
00:10:20.077 "name": "BaseBdev3",
00:10:20.077 "uuid": "eb4a48e4-78ab-49a4-b58d-62069d99f1df",
00:10:20.077 "is_configured": true,
00:10:20.077 "data_offset": 2048,
00:10:20.077 "data_size": 63488
00:10:20.077 }
00:10:20.077 ]
00:10:20.077 }'
00:10:20.077 09:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:20.077 09:54:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:20.646 09:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:10:20.646 09:54:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:20.646 09:54:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:20.646 09:54:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:20.646 09:54:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:20.646 09:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]]
00:10:20.646 09:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:20.646 09:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid'
00:10:20.646 09:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:20.646 09:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:20.646 09:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:20.646 09:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 96ef7c34-e37b-449a-8986-6967ccde3a82
00:10:20.646 09:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:20.646 09:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:20.646 [2024-10-21 09:54:57.109160] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed
00:10:20.646 [2024-10-21 09:54:57.109522] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600
NewBaseBdev
00:10:20.646 [2024-10-21 09:54:57.109602] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:10:20.646 [2024-10-21 09:54:57.109889] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0
00:10:20.646 [2024-10-21 09:54:57.110059] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600
00:10:20.646 [2024-10-21 09:54:57.110074] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006600
00:10:20.646 [2024-10-21 09:54:57.110224] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:10:20.646 09:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:20.646 09:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev
09:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev
00:10:20.646 09:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:10:20.646 09:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:10:20.646 09:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:10:20.646 09:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:10:20.646 09:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:10:20.646 09:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:20.646 09:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:20.646 09:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:20.646 09:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000
00:10:20.646 09:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:20.646 09:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:20.646 [
00:10:20.646 {
00:10:20.646 "name": "NewBaseBdev",
00:10:20.646 "aliases": [
00:10:20.646 "96ef7c34-e37b-449a-8986-6967ccde3a82"
00:10:20.646 ],
00:10:20.646 "product_name": "Malloc disk",
00:10:20.646 "block_size": 512,
00:10:20.646 "num_blocks": 65536,
00:10:20.646 "uuid": "96ef7c34-e37b-449a-8986-6967ccde3a82",
00:10:20.646 "assigned_rate_limits": {
00:10:20.646 "rw_ios_per_sec": 0,
00:10:20.646 "rw_mbytes_per_sec": 0,
00:10:20.646 "r_mbytes_per_sec": 0,
00:10:20.646 "w_mbytes_per_sec": 0
00:10:20.646 },
00:10:20.646 "claimed": true,
00:10:20.646 "claim_type": "exclusive_write",
00:10:20.646 "zoned": false,
00:10:20.646 "supported_io_types": {
00:10:20.646 "read": true,
00:10:20.646 "write": true,
00:10:20.646 "unmap": true,
00:10:20.646 "flush": true,
00:10:20.646 "reset": true,
00:10:20.646 "nvme_admin": false,
00:10:20.646 "nvme_io": false,
00:10:20.646 "nvme_io_md": false,
00:10:20.646 "write_zeroes": true,
00:10:20.646 "zcopy": true,
00:10:20.646 "get_zone_info": false,
00:10:20.646 "zone_management": false,
00:10:20.646 "zone_append": false,
00:10:20.646 "compare": false,
00:10:20.646 "compare_and_write": false,
00:10:20.647 "abort": true,
00:10:20.647 "seek_hole": false,
00:10:20.647 "seek_data": false,
00:10:20.647 "copy": true,
00:10:20.647 "nvme_iov_md": false
00:10:20.647 },
00:10:20.647 "memory_domains": [
00:10:20.647 {
00:10:20.647 "dma_device_id": "system",
00:10:20.647 "dma_device_type": 1
00:10:20.647 },
00:10:20.647 {
00:10:20.647 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:20.647 "dma_device_type": 2
00:10:20.647 }
00:10:20.647 ],
00:10:20.647 "driver_specific": {}
00:10:20.647 }
00:10:20.647 ]
00:10:20.647 09:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:20.647 09:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:10:20.647 09:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3
00:10:20.647 09:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:20.647 09:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:10:20.647 09:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:20.647 09:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:20.647 09:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:20.647 09:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:20.647 09:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:20.647 09:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:20.647 09:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:20.647 09:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:20.647 09:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:20.647 09:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:20.647 09:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:20.647 09:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:20.647 09:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:20.647 "name": "Existed_Raid",
00:10:20.647 "uuid": "5d6da6c3-1662-4117-9fd2-0e9243c0e060",
00:10:20.647 "strip_size_kb": 0,
00:10:20.647 "state": "online",
00:10:20.647 "raid_level": "raid1",
00:10:20.647 "superblock": true,
00:10:20.647 "num_base_bdevs": 3,
00:10:20.647 "num_base_bdevs_discovered": 3,
00:10:20.647 "num_base_bdevs_operational": 3,
00:10:20.647 "base_bdevs_list": [
00:10:20.647 {
00:10:20.647 "name": "NewBaseBdev",
00:10:20.647 "uuid": "96ef7c34-e37b-449a-8986-6967ccde3a82",
00:10:20.647 "is_configured": true,
00:10:20.647 "data_offset": 2048,
00:10:20.647 "data_size": 63488
00:10:20.647 },
00:10:20.647 {
00:10:20.647 "name": "BaseBdev2",
00:10:20.647 "uuid": "b655bd49-edd8-45d7-85b6-b59da483530a",
00:10:20.647 "is_configured": true,
00:10:20.647 "data_offset": 2048,
00:10:20.647 "data_size": 63488
00:10:20.647 },
00:10:20.647 {
00:10:20.647 "name": "BaseBdev3",
00:10:20.647 "uuid": "eb4a48e4-78ab-49a4-b58d-62069d99f1df",
00:10:20.647 "is_configured": true,
00:10:20.647 "data_offset": 2048,
00:10:20.647 "data_size": 63488
00:10:20.647 }
00:10:20.647 ]
00:10:20.647 }'
00:10:20.647 09:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:20.647 09:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:21.229 09:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid
00:10:21.229 09:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:10:21.229 09:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:10:21.229 09:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:10:21.229 09:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name
00:10:21.229 09:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:10:21.229 09:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:10:21.229 09:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:10:21.229 09:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:21.229 09:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:21.229 [2024-10-21 09:54:57.592717] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:10:21.229 09:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:21.229 09:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:10:21.229 "name": "Existed_Raid",
00:10:21.229 "aliases": [
00:10:21.229 "5d6da6c3-1662-4117-9fd2-0e9243c0e060"
00:10:21.229 ],
00:10:21.229 "product_name": "Raid Volume",
00:10:21.229 "block_size": 512,
00:10:21.229 "num_blocks": 63488,
00:10:21.229 "uuid": "5d6da6c3-1662-4117-9fd2-0e9243c0e060",
00:10:21.229 "assigned_rate_limits": {
00:10:21.229 "rw_ios_per_sec": 0,
00:10:21.229 "rw_mbytes_per_sec": 0,
00:10:21.229 "r_mbytes_per_sec": 0,
00:10:21.229 "w_mbytes_per_sec": 0
00:10:21.229 },
00:10:21.229 "claimed": false,
00:10:21.229 "zoned": false,
00:10:21.229 "supported_io_types": {
00:10:21.229 "read": true,
00:10:21.229 "write": true,
00:10:21.229 "unmap": false,
00:10:21.229 "flush": false,
00:10:21.229 "reset": true,
00:10:21.229 "nvme_admin": false,
00:10:21.229 "nvme_io": false,
00:10:21.229 "nvme_io_md": false,
00:10:21.229 "write_zeroes": true,
00:10:21.229 "zcopy": false,
00:10:21.229 "get_zone_info": false,
00:10:21.229 "zone_management": false,
00:10:21.229 "zone_append": false,
00:10:21.229 "compare": false,
00:10:21.229 "compare_and_write": false,
00:10:21.229 "abort": false,
00:10:21.229 "seek_hole": false,
00:10:21.229 "seek_data": false,
00:10:21.229 "copy": false,
00:10:21.229 "nvme_iov_md": false
00:10:21.229 },
00:10:21.229 "memory_domains": [
00:10:21.229 {
00:10:21.229 "dma_device_id": "system",
00:10:21.229 "dma_device_type": 1
00:10:21.229 },
00:10:21.229 {
00:10:21.229 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:21.229 "dma_device_type": 2
00:10:21.229 },
00:10:21.229 {
00:10:21.229 "dma_device_id": "system",
00:10:21.229 "dma_device_type": 1
00:10:21.229 },
00:10:21.229 {
00:10:21.229 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:21.229 "dma_device_type": 2
00:10:21.229 },
00:10:21.229 {
00:10:21.229 "dma_device_id": "system",
00:10:21.229 "dma_device_type": 1
00:10:21.229 },
00:10:21.229 {
00:10:21.229 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:21.229 "dma_device_type": 2
00:10:21.229 }
00:10:21.229 ],
00:10:21.229 "driver_specific": {
00:10:21.229 "raid": {
00:10:21.229 "uuid": "5d6da6c3-1662-4117-9fd2-0e9243c0e060",
00:10:21.229 "strip_size_kb": 0,
00:10:21.229 "state": "online",
00:10:21.229 "raid_level": "raid1",
00:10:21.229 "superblock": true,
00:10:21.229 "num_base_bdevs": 3,
00:10:21.229 "num_base_bdevs_discovered": 3,
00:10:21.229 "num_base_bdevs_operational": 3,
00:10:21.229 "base_bdevs_list": [
00:10:21.229 {
00:10:21.229 "name": "NewBaseBdev",
00:10:21.229 "uuid": "96ef7c34-e37b-449a-8986-6967ccde3a82",
00:10:21.229 "is_configured": true,
00:10:21.229 "data_offset": 2048,
00:10:21.229 "data_size": 63488
00:10:21.229 },
00:10:21.229 {
00:10:21.229 "name": "BaseBdev2",
00:10:21.229 "uuid": "b655bd49-edd8-45d7-85b6-b59da483530a",
00:10:21.229 "is_configured": true,
00:10:21.229 "data_offset": 2048,
00:10:21.229 "data_size": 63488
00:10:21.229 },
00:10:21.229 {
00:10:21.229 "name": "BaseBdev3",
00:10:21.229 "uuid": "eb4a48e4-78ab-49a4-b58d-62069d99f1df",
00:10:21.229 "is_configured": true,
00:10:21.229 "data_offset": 2048,
00:10:21.229 "data_size": 63488
00:10:21.229 }
00:10:21.229 ]
00:10:21.229 }
00:10:21.229 }
00:10:21.229 }'
00:10:21.229 09:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:10:21.229 09:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev
00:10:21.229 BaseBdev2
00:10:21.229 BaseBdev3'
00:10:21.229 09:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:21.229 09:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:10:21.229 09:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:21.229 09:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
09:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:21.229 09:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.229 09:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.229 09:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.229 09:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:21.229 09:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:21.229 09:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:21.229 09:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:21.229 09:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:21.229 09:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.229 09:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.229 09:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.229 09:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:21.229 09:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:21.229 09:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:21.229 09:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:21.229 09:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:10:21.229 09:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.229 09:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:21.229 09:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.489 09:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:21.489 09:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:21.489 09:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:21.489 09:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.489 09:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.489 [2024-10-21 09:54:57.839961] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:21.489 [2024-10-21 09:54:57.839996] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:21.489 [2024-10-21 09:54:57.840078] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:21.489 [2024-10-21 09:54:57.840381] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:21.489 [2024-10-21 09:54:57.840392] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state offline 00:10:21.489 09:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.489 09:54:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 67597 00:10:21.489 09:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 67597 ']' 00:10:21.489 09:54:57 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 67597 00:10:21.489 09:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:10:21.489 09:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:21.489 09:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67597 00:10:21.489 killing process with pid 67597 00:10:21.489 09:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:21.489 09:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:21.489 09:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67597' 00:10:21.489 09:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 67597 00:10:21.489 [2024-10-21 09:54:57.894045] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:21.489 09:54:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 67597 00:10:21.749 [2024-10-21 09:54:58.198480] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:23.131 ************************************ 00:10:23.131 END TEST raid_state_function_test_sb 00:10:23.131 ************************************ 00:10:23.131 09:54:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:23.131 00:10:23.131 real 0m10.727s 00:10:23.131 user 0m17.078s 00:10:23.131 sys 0m1.884s 00:10:23.131 09:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:23.131 09:54:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.131 09:54:59 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:10:23.131 09:54:59 
bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:23.131 09:54:59 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:23.131 09:54:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:23.131 ************************************ 00:10:23.131 START TEST raid_superblock_test 00:10:23.131 ************************************ 00:10:23.131 09:54:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 3 00:10:23.131 09:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:10:23.131 09:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:10:23.131 09:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:23.131 09:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:23.131 09:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:23.131 09:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:23.131 09:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:23.131 09:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:23.131 09:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:23.131 09:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:23.131 09:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:23.131 09:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:23.131 09:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:23.131 09:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:10:23.131 09:54:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:10:23.131 09:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=68220 00:10:23.131 09:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:23.131 09:54:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 68220 00:10:23.131 09:54:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 68220 ']' 00:10:23.131 09:54:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:23.131 09:54:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:23.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:23.131 09:54:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:23.131 09:54:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:23.131 09:54:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.131 [2024-10-21 09:54:59.508183] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
00:10:23.131 [2024-10-21 09:54:59.508355] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68220 ] 00:10:23.131 [2024-10-21 09:54:59.692885] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:23.391 [2024-10-21 09:54:59.818851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:23.650 [2024-10-21 09:55:00.044127] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:23.650 [2024-10-21 09:55:00.044185] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:23.911 09:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:23.911 09:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:10:23.911 09:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:23.911 09:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:23.911 09:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:23.911 09:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:23.911 09:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:23.911 09:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:23.911 09:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:23.911 09:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:23.911 09:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:23.911 
09:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.911 09:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.911 malloc1 00:10:23.911 09:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.911 09:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:23.911 09:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.911 09:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.911 [2024-10-21 09:55:00.388367] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:23.911 [2024-10-21 09:55:00.388480] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:23.911 [2024-10-21 09:55:00.388509] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:10:23.911 [2024-10-21 09:55:00.388518] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:23.911 [2024-10-21 09:55:00.390666] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:23.911 [2024-10-21 09:55:00.390702] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:23.911 pt1 00:10:23.911 09:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.911 09:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:23.911 09:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:23.911 09:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:23.911 09:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:23.911 09:55:00 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:23.911 09:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:23.911 09:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:23.911 09:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:23.911 09:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:23.911 09:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.911 09:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.911 malloc2 00:10:23.911 09:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.911 09:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:23.911 09:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.911 09:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.911 [2024-10-21 09:55:00.443107] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:23.911 [2024-10-21 09:55:00.443203] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:23.911 [2024-10-21 09:55:00.443229] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:10:23.911 [2024-10-21 09:55:00.443238] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:23.911 [2024-10-21 09:55:00.445573] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:23.911 [2024-10-21 09:55:00.445626] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:23.911 
pt2 00:10:23.911 09:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.911 09:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:23.911 09:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:23.911 09:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:23.911 09:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:23.911 09:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:23.911 09:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:23.911 09:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:23.911 09:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:23.911 09:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:23.911 09:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.911 09:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.911 malloc3 00:10:24.171 09:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.171 09:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:24.171 09:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.171 09:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.171 [2024-10-21 09:55:00.512413] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:24.171 [2024-10-21 09:55:00.512515] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:24.171 [2024-10-21 09:55:00.512559] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:10:24.171 [2024-10-21 09:55:00.512613] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:24.171 [2024-10-21 09:55:00.514946] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:24.171 [2024-10-21 09:55:00.515028] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:24.171 pt3 00:10:24.171 09:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.171 09:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:24.171 09:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:24.171 09:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:10:24.171 09:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.171 09:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.171 [2024-10-21 09:55:00.524438] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:24.171 [2024-10-21 09:55:00.526607] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:24.171 [2024-10-21 09:55:00.526735] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:24.171 [2024-10-21 09:55:00.526942] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000005b80 00:10:24.171 [2024-10-21 09:55:00.526959] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:24.171 [2024-10-21 09:55:00.527242] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:10:24.171 
[2024-10-21 09:55:00.527437] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000005b80 00:10:24.171 [2024-10-21 09:55:00.527449] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000005b80 00:10:24.171 [2024-10-21 09:55:00.527634] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:24.171 09:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.171 09:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:24.171 09:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:24.171 09:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:24.171 09:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:24.171 09:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:24.171 09:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:24.171 09:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:24.171 09:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:24.171 09:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:24.171 09:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:24.171 09:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.171 09:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:24.171 09:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.171 09:55:00 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:24.171 09:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.171 09:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:24.171 "name": "raid_bdev1", 00:10:24.171 "uuid": "bd2dfe2c-ce28-44e2-93e0-0f5bb774c658", 00:10:24.171 "strip_size_kb": 0, 00:10:24.171 "state": "online", 00:10:24.171 "raid_level": "raid1", 00:10:24.171 "superblock": true, 00:10:24.171 "num_base_bdevs": 3, 00:10:24.171 "num_base_bdevs_discovered": 3, 00:10:24.171 "num_base_bdevs_operational": 3, 00:10:24.171 "base_bdevs_list": [ 00:10:24.171 { 00:10:24.171 "name": "pt1", 00:10:24.171 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:24.171 "is_configured": true, 00:10:24.171 "data_offset": 2048, 00:10:24.171 "data_size": 63488 00:10:24.171 }, 00:10:24.171 { 00:10:24.171 "name": "pt2", 00:10:24.171 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:24.171 "is_configured": true, 00:10:24.171 "data_offset": 2048, 00:10:24.171 "data_size": 63488 00:10:24.171 }, 00:10:24.171 { 00:10:24.171 "name": "pt3", 00:10:24.171 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:24.171 "is_configured": true, 00:10:24.171 "data_offset": 2048, 00:10:24.171 "data_size": 63488 00:10:24.171 } 00:10:24.171 ] 00:10:24.171 }' 00:10:24.171 09:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:24.171 09:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.431 09:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:24.431 09:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:24.431 09:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:24.431 09:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:24.431 09:55:00 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:24.431 09:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:24.431 09:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:24.431 09:55:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:24.431 09:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.431 09:55:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.431 [2024-10-21 09:55:00.979992] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:24.431 09:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.431 09:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:24.431 "name": "raid_bdev1", 00:10:24.431 "aliases": [ 00:10:24.431 "bd2dfe2c-ce28-44e2-93e0-0f5bb774c658" 00:10:24.431 ], 00:10:24.431 "product_name": "Raid Volume", 00:10:24.431 "block_size": 512, 00:10:24.431 "num_blocks": 63488, 00:10:24.431 "uuid": "bd2dfe2c-ce28-44e2-93e0-0f5bb774c658", 00:10:24.431 "assigned_rate_limits": { 00:10:24.431 "rw_ios_per_sec": 0, 00:10:24.431 "rw_mbytes_per_sec": 0, 00:10:24.431 "r_mbytes_per_sec": 0, 00:10:24.431 "w_mbytes_per_sec": 0 00:10:24.431 }, 00:10:24.431 "claimed": false, 00:10:24.431 "zoned": false, 00:10:24.431 "supported_io_types": { 00:10:24.431 "read": true, 00:10:24.431 "write": true, 00:10:24.431 "unmap": false, 00:10:24.431 "flush": false, 00:10:24.431 "reset": true, 00:10:24.431 "nvme_admin": false, 00:10:24.431 "nvme_io": false, 00:10:24.431 "nvme_io_md": false, 00:10:24.431 "write_zeroes": true, 00:10:24.431 "zcopy": false, 00:10:24.431 "get_zone_info": false, 00:10:24.431 "zone_management": false, 00:10:24.431 "zone_append": false, 00:10:24.431 "compare": false, 00:10:24.431 
"compare_and_write": false, 00:10:24.431 "abort": false, 00:10:24.431 "seek_hole": false, 00:10:24.431 "seek_data": false, 00:10:24.431 "copy": false, 00:10:24.431 "nvme_iov_md": false 00:10:24.431 }, 00:10:24.431 "memory_domains": [ 00:10:24.431 { 00:10:24.431 "dma_device_id": "system", 00:10:24.431 "dma_device_type": 1 00:10:24.431 }, 00:10:24.431 { 00:10:24.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:24.431 "dma_device_type": 2 00:10:24.431 }, 00:10:24.431 { 00:10:24.431 "dma_device_id": "system", 00:10:24.431 "dma_device_type": 1 00:10:24.431 }, 00:10:24.431 { 00:10:24.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:24.431 "dma_device_type": 2 00:10:24.431 }, 00:10:24.431 { 00:10:24.431 "dma_device_id": "system", 00:10:24.431 "dma_device_type": 1 00:10:24.431 }, 00:10:24.431 { 00:10:24.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:24.431 "dma_device_type": 2 00:10:24.431 } 00:10:24.431 ], 00:10:24.431 "driver_specific": { 00:10:24.431 "raid": { 00:10:24.431 "uuid": "bd2dfe2c-ce28-44e2-93e0-0f5bb774c658", 00:10:24.431 "strip_size_kb": 0, 00:10:24.431 "state": "online", 00:10:24.431 "raid_level": "raid1", 00:10:24.431 "superblock": true, 00:10:24.431 "num_base_bdevs": 3, 00:10:24.431 "num_base_bdevs_discovered": 3, 00:10:24.431 "num_base_bdevs_operational": 3, 00:10:24.431 "base_bdevs_list": [ 00:10:24.431 { 00:10:24.431 "name": "pt1", 00:10:24.431 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:24.431 "is_configured": true, 00:10:24.431 "data_offset": 2048, 00:10:24.431 "data_size": 63488 00:10:24.431 }, 00:10:24.431 { 00:10:24.431 "name": "pt2", 00:10:24.431 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:24.431 "is_configured": true, 00:10:24.431 "data_offset": 2048, 00:10:24.431 "data_size": 63488 00:10:24.431 }, 00:10:24.431 { 00:10:24.431 "name": "pt3", 00:10:24.431 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:24.431 "is_configured": true, 00:10:24.431 "data_offset": 2048, 00:10:24.431 "data_size": 63488 00:10:24.431 } 
00:10:24.431 ] 00:10:24.431 } 00:10:24.431 } 00:10:24.431 }' 00:10:24.431 09:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:24.691 09:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:24.691 pt2 00:10:24.691 pt3' 00:10:24.691 09:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:24.691 09:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:24.691 09:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:24.691 09:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:24.691 09:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:24.691 09:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.691 09:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.691 09:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.692 09:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:24.692 09:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:24.692 09:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:24.692 09:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:24.692 09:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.692 09:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.692 09:55:01 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:24.692 09:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.692 09:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:24.692 09:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:24.692 09:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:24.692 09:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:24.692 09:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.692 09:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.692 09:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:24.692 09:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.692 09:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:24.692 09:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:24.692 09:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:24.692 09:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.692 09:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.692 09:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:24.692 [2024-10-21 09:55:01.263427] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:24.692 09:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:10:24.953 09:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=bd2dfe2c-ce28-44e2-93e0-0f5bb774c658 00:10:24.953 09:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z bd2dfe2c-ce28-44e2-93e0-0f5bb774c658 ']' 00:10:24.953 09:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:24.953 09:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.953 09:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.953 [2024-10-21 09:55:01.311083] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:24.953 [2024-10-21 09:55:01.311163] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:24.953 [2024-10-21 09:55:01.311297] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:24.953 [2024-10-21 09:55:01.311423] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:24.953 [2024-10-21 09:55:01.311474] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000005b80 name raid_bdev1, state offline 00:10:24.953 09:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.953 09:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.953 09:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:24.953 09:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.953 09:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.953 09:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.953 09:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:24.953 
09:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:24.953 09:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:24.953 09:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:24.953 09:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.953 09:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.953 09:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.953 09:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:24.953 09:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:24.953 09:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.953 09:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.953 09:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.953 09:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:24.953 09:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:24.953 09:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.953 09:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.953 09:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.953 09:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:24.953 09:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.953 09:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:10:24.953 09:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:24.953 09:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.953 09:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:24.953 09:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:24.953 09:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:10:24.953 09:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:24.953 09:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:10:24.953 09:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:24.953 09:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:10:24.953 09:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:24.953 09:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:24.953 09:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.953 09:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.953 [2024-10-21 09:55:01.462873] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:24.953 [2024-10-21 09:55:01.465020] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:24.953 [2024-10-21 09:55:01.465079] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev malloc3 is claimed 00:10:24.953 [2024-10-21 09:55:01.465149] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:24.953 [2024-10-21 09:55:01.465207] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:24.953 [2024-10-21 09:55:01.465229] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:24.953 [2024-10-21 09:55:01.465259] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:24.953 [2024-10-21 09:55:01.465272] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000005f00 name raid_bdev1, state configuring 00:10:24.953 request: 00:10:24.953 { 00:10:24.953 "name": "raid_bdev1", 00:10:24.953 "raid_level": "raid1", 00:10:24.953 "base_bdevs": [ 00:10:24.953 "malloc1", 00:10:24.953 "malloc2", 00:10:24.953 "malloc3" 00:10:24.953 ], 00:10:24.953 "superblock": false, 00:10:24.953 "method": "bdev_raid_create", 00:10:24.953 "req_id": 1 00:10:24.953 } 00:10:24.953 Got JSON-RPC error response 00:10:24.953 response: 00:10:24.953 { 00:10:24.953 "code": -17, 00:10:24.953 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:24.953 } 00:10:24.953 09:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:24.953 09:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:10:24.953 09:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:24.953 09:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:24.953 09:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:24.953 09:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.953 09:55:01 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:24.953 09:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.953 09:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.953 09:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.953 09:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:24.953 09:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:24.953 09:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:24.953 09:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.953 09:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.953 [2024-10-21 09:55:01.510741] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:24.953 [2024-10-21 09:55:01.510868] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:24.953 [2024-10-21 09:55:01.510946] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:24.953 [2024-10-21 09:55:01.510982] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:24.953 [2024-10-21 09:55:01.513269] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:24.953 [2024-10-21 09:55:01.513342] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:24.953 [2024-10-21 09:55:01.513454] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:24.953 [2024-10-21 09:55:01.513533] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:24.953 pt1 00:10:24.953 09:55:01 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.953 09:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:10:24.953 09:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:24.953 09:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:24.953 09:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:24.953 09:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:24.953 09:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:24.953 09:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:24.953 09:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:24.953 09:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:24.953 09:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:24.953 09:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.953 09:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:24.953 09:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.953 09:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.954 09:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.214 09:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:25.214 "name": "raid_bdev1", 00:10:25.214 "uuid": "bd2dfe2c-ce28-44e2-93e0-0f5bb774c658", 00:10:25.214 "strip_size_kb": 0, 00:10:25.214 "state": "configuring", 00:10:25.214 
"raid_level": "raid1", 00:10:25.214 "superblock": true, 00:10:25.214 "num_base_bdevs": 3, 00:10:25.214 "num_base_bdevs_discovered": 1, 00:10:25.214 "num_base_bdevs_operational": 3, 00:10:25.214 "base_bdevs_list": [ 00:10:25.214 { 00:10:25.214 "name": "pt1", 00:10:25.214 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:25.214 "is_configured": true, 00:10:25.214 "data_offset": 2048, 00:10:25.214 "data_size": 63488 00:10:25.214 }, 00:10:25.214 { 00:10:25.214 "name": null, 00:10:25.214 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:25.214 "is_configured": false, 00:10:25.214 "data_offset": 2048, 00:10:25.214 "data_size": 63488 00:10:25.214 }, 00:10:25.214 { 00:10:25.214 "name": null, 00:10:25.214 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:25.214 "is_configured": false, 00:10:25.214 "data_offset": 2048, 00:10:25.214 "data_size": 63488 00:10:25.214 } 00:10:25.214 ] 00:10:25.214 }' 00:10:25.214 09:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:25.214 09:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.474 09:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:10:25.474 09:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:25.474 09:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.474 09:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.474 [2024-10-21 09:55:01.986019] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:25.474 [2024-10-21 09:55:01.986143] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:25.474 [2024-10-21 09:55:01.986185] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:10:25.474 [2024-10-21 09:55:01.986214] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:25.474 [2024-10-21 09:55:01.986772] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:25.474 [2024-10-21 09:55:01.986836] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:25.474 [2024-10-21 09:55:01.986967] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:25.474 [2024-10-21 09:55:01.987023] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:25.474 pt2 00:10:25.474 09:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.474 09:55:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:25.474 09:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.474 09:55:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.474 [2024-10-21 09:55:01.998019] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:25.474 09:55:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.474 09:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:10:25.474 09:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:25.474 09:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:25.474 09:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:25.474 09:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:25.474 09:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:25.474 09:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:25.474 09:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:25.474 09:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:25.474 09:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:25.474 09:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.474 09:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:25.474 09:55:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.474 09:55:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.474 09:55:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.474 09:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:25.474 "name": "raid_bdev1", 00:10:25.474 "uuid": "bd2dfe2c-ce28-44e2-93e0-0f5bb774c658", 00:10:25.474 "strip_size_kb": 0, 00:10:25.474 "state": "configuring", 00:10:25.474 "raid_level": "raid1", 00:10:25.474 "superblock": true, 00:10:25.474 "num_base_bdevs": 3, 00:10:25.474 "num_base_bdevs_discovered": 1, 00:10:25.474 "num_base_bdevs_operational": 3, 00:10:25.474 "base_bdevs_list": [ 00:10:25.474 { 00:10:25.474 "name": "pt1", 00:10:25.474 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:25.474 "is_configured": true, 00:10:25.474 "data_offset": 2048, 00:10:25.474 "data_size": 63488 00:10:25.474 }, 00:10:25.474 { 00:10:25.474 "name": null, 00:10:25.474 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:25.474 "is_configured": false, 00:10:25.474 "data_offset": 0, 00:10:25.474 "data_size": 63488 00:10:25.474 }, 00:10:25.474 { 00:10:25.474 "name": null, 00:10:25.474 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:25.475 "is_configured": false, 00:10:25.475 "data_offset": 2048, 00:10:25.475 
"data_size": 63488 00:10:25.475 } 00:10:25.475 ] 00:10:25.475 }' 00:10:25.475 09:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:25.475 09:55:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.044 09:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:26.044 09:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:26.044 09:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:26.044 09:55:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.044 09:55:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.044 [2024-10-21 09:55:02.457193] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:26.044 [2024-10-21 09:55:02.457350] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:26.044 [2024-10-21 09:55:02.457390] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:10:26.044 [2024-10-21 09:55:02.457435] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:26.044 [2024-10-21 09:55:02.457939] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:26.044 [2024-10-21 09:55:02.458022] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:26.044 [2024-10-21 09:55:02.458149] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:26.044 [2024-10-21 09:55:02.458210] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:26.044 pt2 00:10:26.044 09:55:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.044 09:55:02 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:26.044 09:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:26.044 09:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:26.044 09:55:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.044 09:55:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.044 [2024-10-21 09:55:02.469176] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:26.044 [2024-10-21 09:55:02.469232] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:26.044 [2024-10-21 09:55:02.469254] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:10:26.044 [2024-10-21 09:55:02.469267] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:26.044 [2024-10-21 09:55:02.469730] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:26.044 [2024-10-21 09:55:02.469756] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:26.044 [2024-10-21 09:55:02.469835] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:26.044 [2024-10-21 09:55:02.469861] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:26.044 [2024-10-21 09:55:02.470018] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:10:26.044 [2024-10-21 09:55:02.470039] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:26.044 [2024-10-21 09:55:02.470330] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:10:26.044 [2024-10-21 09:55:02.470521] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 
00:10:26.044 [2024-10-21 09:55:02.470534] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:10:26.044 [2024-10-21 09:55:02.470724] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:26.044 pt3 00:10:26.044 09:55:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.044 09:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:26.044 09:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:26.044 09:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:26.044 09:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:26.044 09:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:26.044 09:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:26.044 09:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:26.044 09:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:26.044 09:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.044 09:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.044 09:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:26.044 09:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.044 09:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.044 09:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:26.044 09:55:02 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.044 09:55:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.044 09:55:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.044 09:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.044 "name": "raid_bdev1", 00:10:26.044 "uuid": "bd2dfe2c-ce28-44e2-93e0-0f5bb774c658", 00:10:26.044 "strip_size_kb": 0, 00:10:26.044 "state": "online", 00:10:26.044 "raid_level": "raid1", 00:10:26.044 "superblock": true, 00:10:26.044 "num_base_bdevs": 3, 00:10:26.044 "num_base_bdevs_discovered": 3, 00:10:26.044 "num_base_bdevs_operational": 3, 00:10:26.044 "base_bdevs_list": [ 00:10:26.044 { 00:10:26.044 "name": "pt1", 00:10:26.044 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:26.044 "is_configured": true, 00:10:26.044 "data_offset": 2048, 00:10:26.044 "data_size": 63488 00:10:26.044 }, 00:10:26.044 { 00:10:26.044 "name": "pt2", 00:10:26.044 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:26.044 "is_configured": true, 00:10:26.044 "data_offset": 2048, 00:10:26.044 "data_size": 63488 00:10:26.044 }, 00:10:26.044 { 00:10:26.044 "name": "pt3", 00:10:26.044 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:26.044 "is_configured": true, 00:10:26.044 "data_offset": 2048, 00:10:26.044 "data_size": 63488 00:10:26.044 } 00:10:26.044 ] 00:10:26.044 }' 00:10:26.044 09:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.044 09:55:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.303 09:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:26.303 09:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:26.303 09:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:26.303 09:55:02 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:26.303 09:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:26.303 09:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:26.303 09:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:26.303 09:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:26.303 09:55:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.303 09:55:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.303 [2024-10-21 09:55:02.864926] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:26.303 09:55:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.563 09:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:26.563 "name": "raid_bdev1", 00:10:26.563 "aliases": [ 00:10:26.563 "bd2dfe2c-ce28-44e2-93e0-0f5bb774c658" 00:10:26.563 ], 00:10:26.563 "product_name": "Raid Volume", 00:10:26.563 "block_size": 512, 00:10:26.563 "num_blocks": 63488, 00:10:26.563 "uuid": "bd2dfe2c-ce28-44e2-93e0-0f5bb774c658", 00:10:26.563 "assigned_rate_limits": { 00:10:26.563 "rw_ios_per_sec": 0, 00:10:26.563 "rw_mbytes_per_sec": 0, 00:10:26.563 "r_mbytes_per_sec": 0, 00:10:26.563 "w_mbytes_per_sec": 0 00:10:26.563 }, 00:10:26.563 "claimed": false, 00:10:26.563 "zoned": false, 00:10:26.563 "supported_io_types": { 00:10:26.563 "read": true, 00:10:26.563 "write": true, 00:10:26.563 "unmap": false, 00:10:26.563 "flush": false, 00:10:26.563 "reset": true, 00:10:26.563 "nvme_admin": false, 00:10:26.563 "nvme_io": false, 00:10:26.563 "nvme_io_md": false, 00:10:26.563 "write_zeroes": true, 00:10:26.563 "zcopy": false, 00:10:26.563 "get_zone_info": false, 00:10:26.563 
"zone_management": false, 00:10:26.563 "zone_append": false, 00:10:26.563 "compare": false, 00:10:26.563 "compare_and_write": false, 00:10:26.563 "abort": false, 00:10:26.563 "seek_hole": false, 00:10:26.563 "seek_data": false, 00:10:26.563 "copy": false, 00:10:26.563 "nvme_iov_md": false 00:10:26.563 }, 00:10:26.563 "memory_domains": [ 00:10:26.563 { 00:10:26.563 "dma_device_id": "system", 00:10:26.563 "dma_device_type": 1 00:10:26.563 }, 00:10:26.563 { 00:10:26.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.563 "dma_device_type": 2 00:10:26.563 }, 00:10:26.563 { 00:10:26.563 "dma_device_id": "system", 00:10:26.563 "dma_device_type": 1 00:10:26.563 }, 00:10:26.563 { 00:10:26.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.563 "dma_device_type": 2 00:10:26.563 }, 00:10:26.563 { 00:10:26.563 "dma_device_id": "system", 00:10:26.563 "dma_device_type": 1 00:10:26.563 }, 00:10:26.563 { 00:10:26.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.563 "dma_device_type": 2 00:10:26.563 } 00:10:26.563 ], 00:10:26.563 "driver_specific": { 00:10:26.563 "raid": { 00:10:26.563 "uuid": "bd2dfe2c-ce28-44e2-93e0-0f5bb774c658", 00:10:26.563 "strip_size_kb": 0, 00:10:26.563 "state": "online", 00:10:26.563 "raid_level": "raid1", 00:10:26.563 "superblock": true, 00:10:26.563 "num_base_bdevs": 3, 00:10:26.563 "num_base_bdevs_discovered": 3, 00:10:26.563 "num_base_bdevs_operational": 3, 00:10:26.563 "base_bdevs_list": [ 00:10:26.563 { 00:10:26.563 "name": "pt1", 00:10:26.563 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:26.563 "is_configured": true, 00:10:26.563 "data_offset": 2048, 00:10:26.563 "data_size": 63488 00:10:26.563 }, 00:10:26.563 { 00:10:26.563 "name": "pt2", 00:10:26.563 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:26.563 "is_configured": true, 00:10:26.563 "data_offset": 2048, 00:10:26.563 "data_size": 63488 00:10:26.563 }, 00:10:26.563 { 00:10:26.563 "name": "pt3", 00:10:26.564 "uuid": "00000000-0000-0000-0000-000000000003", 
00:10:26.564 "is_configured": true, 00:10:26.564 "data_offset": 2048, 00:10:26.564 "data_size": 63488 00:10:26.564 } 00:10:26.564 ] 00:10:26.564 } 00:10:26.564 } 00:10:26.564 }' 00:10:26.564 09:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:26.564 09:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:26.564 pt2 00:10:26.564 pt3' 00:10:26.564 09:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:26.564 09:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:26.564 09:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:26.564 09:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:26.564 09:55:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:26.564 09:55:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.564 09:55:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.564 09:55:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.564 09:55:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:26.564 09:55:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:26.564 09:55:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:26.564 09:55:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:26.564 09:55:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.564 
09:55:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.564 09:55:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:26.564 09:55:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.564 09:55:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:26.564 09:55:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:26.564 09:55:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:26.564 09:55:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:26.564 09:55:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:26.564 09:55:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.564 09:55:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.564 09:55:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.564 09:55:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:26.564 09:55:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:26.564 09:55:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:26.564 09:55:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.564 09:55:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.564 09:55:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:26.564 [2024-10-21 09:55:03.144371] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:10:26.824 09:55:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.824 09:55:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' bd2dfe2c-ce28-44e2-93e0-0f5bb774c658 '!=' bd2dfe2c-ce28-44e2-93e0-0f5bb774c658 ']' 00:10:26.824 09:55:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:10:26.824 09:55:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:26.824 09:55:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:26.824 09:55:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:10:26.824 09:55:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.824 09:55:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.824 [2024-10-21 09:55:03.188069] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:10:26.824 09:55:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.824 09:55:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:26.824 09:55:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:26.824 09:55:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:26.824 09:55:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:26.824 09:55:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:26.824 09:55:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:26.824 09:55:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.824 09:55:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:10:26.824 09:55:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:26.824 09:55:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.824 09:55:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.824 09:55:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.824 09:55:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:26.824 09:55:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.824 09:55:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.824 09:55:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.824 "name": "raid_bdev1", 00:10:26.824 "uuid": "bd2dfe2c-ce28-44e2-93e0-0f5bb774c658", 00:10:26.824 "strip_size_kb": 0, 00:10:26.824 "state": "online", 00:10:26.824 "raid_level": "raid1", 00:10:26.824 "superblock": true, 00:10:26.824 "num_base_bdevs": 3, 00:10:26.824 "num_base_bdevs_discovered": 2, 00:10:26.824 "num_base_bdevs_operational": 2, 00:10:26.824 "base_bdevs_list": [ 00:10:26.824 { 00:10:26.824 "name": null, 00:10:26.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.824 "is_configured": false, 00:10:26.824 "data_offset": 0, 00:10:26.824 "data_size": 63488 00:10:26.824 }, 00:10:26.824 { 00:10:26.824 "name": "pt2", 00:10:26.824 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:26.824 "is_configured": true, 00:10:26.824 "data_offset": 2048, 00:10:26.824 "data_size": 63488 00:10:26.824 }, 00:10:26.824 { 00:10:26.824 "name": "pt3", 00:10:26.824 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:26.824 "is_configured": true, 00:10:26.824 "data_offset": 2048, 00:10:26.824 "data_size": 63488 00:10:26.824 } 00:10:26.824 ] 00:10:26.824 }' 00:10:26.824 09:55:03 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.824 09:55:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.084 09:55:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:27.084 09:55:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.084 09:55:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.084 [2024-10-21 09:55:03.619311] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:27.084 [2024-10-21 09:55:03.619384] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:27.084 [2024-10-21 09:55:03.619491] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:27.084 [2024-10-21 09:55:03.619583] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:27.084 [2024-10-21 09:55:03.619637] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:10:27.084 09:55:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.084 09:55:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:10:27.084 09:55:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.084 09:55:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.084 09:55:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.084 09:55:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.084 09:55:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:10:27.084 09:55:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:10:27.084 
09:55:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:10:27.084 09:55:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:27.084 09:55:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:10:27.084 09:55:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.084 09:55:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.084 09:55:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.084 09:55:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:27.084 09:55:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:27.084 09:55:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:10:27.084 09:55:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.084 09:55:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.344 09:55:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.344 09:55:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:27.344 09:55:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:27.344 09:55:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:10:27.344 09:55:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:27.344 09:55:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:27.344 09:55:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.344 09:55:03 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:27.344 [2024-10-21 09:55:03.687155] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:27.344 [2024-10-21 09:55:03.687258] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:27.344 [2024-10-21 09:55:03.687292] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:27.344 [2024-10-21 09:55:03.687322] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:27.344 [2024-10-21 09:55:03.689641] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:27.344 [2024-10-21 09:55:03.689715] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:27.344 [2024-10-21 09:55:03.689815] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:27.344 [2024-10-21 09:55:03.689895] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:27.344 pt2 00:10:27.344 09:55:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.344 09:55:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:10:27.344 09:55:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:27.344 09:55:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:27.344 09:55:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:27.344 09:55:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:27.344 09:55:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:27.344 09:55:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.344 09:55:03 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.344 09:55:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:27.344 09:55:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.344 09:55:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.344 09:55:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.344 09:55:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:27.344 09:55:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.344 09:55:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.344 09:55:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.344 "name": "raid_bdev1", 00:10:27.344 "uuid": "bd2dfe2c-ce28-44e2-93e0-0f5bb774c658", 00:10:27.344 "strip_size_kb": 0, 00:10:27.344 "state": "configuring", 00:10:27.344 "raid_level": "raid1", 00:10:27.344 "superblock": true, 00:10:27.344 "num_base_bdevs": 3, 00:10:27.344 "num_base_bdevs_discovered": 1, 00:10:27.344 "num_base_bdevs_operational": 2, 00:10:27.344 "base_bdevs_list": [ 00:10:27.344 { 00:10:27.344 "name": null, 00:10:27.344 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.344 "is_configured": false, 00:10:27.344 "data_offset": 2048, 00:10:27.344 "data_size": 63488 00:10:27.344 }, 00:10:27.344 { 00:10:27.344 "name": "pt2", 00:10:27.344 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:27.344 "is_configured": true, 00:10:27.344 "data_offset": 2048, 00:10:27.344 "data_size": 63488 00:10:27.344 }, 00:10:27.344 { 00:10:27.344 "name": null, 00:10:27.344 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:27.344 "is_configured": false, 00:10:27.344 "data_offset": 2048, 00:10:27.344 "data_size": 63488 00:10:27.344 } 00:10:27.344 ] 00:10:27.344 }' 
00:10:27.344 09:55:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:27.344 09:55:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.604 09:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:10:27.604 09:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:27.604 09:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:10:27.604 09:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:27.604 09:55:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.604 09:55:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.604 [2024-10-21 09:55:04.146458] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:27.604 [2024-10-21 09:55:04.146611] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:27.604 [2024-10-21 09:55:04.146660] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:10:27.604 [2024-10-21 09:55:04.146700] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:27.604 [2024-10-21 09:55:04.147223] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:27.604 [2024-10-21 09:55:04.147290] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:27.604 [2024-10-21 09:55:04.147406] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:27.604 [2024-10-21 09:55:04.147467] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:27.604 [2024-10-21 09:55:04.147651] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:10:27.604 [2024-10-21 09:55:04.147698] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:27.604 [2024-10-21 09:55:04.148004] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:27.604 [2024-10-21 09:55:04.148209] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:10:27.604 [2024-10-21 09:55:04.148248] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:10:27.604 [2024-10-21 09:55:04.148432] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:27.604 pt3 00:10:27.604 09:55:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.604 09:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:27.604 09:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:27.604 09:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:27.604 09:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:27.604 09:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:27.604 09:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:27.604 09:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.604 09:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.604 09:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:27.604 09:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.604 09:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.604 09:55:04 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.604 09:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:27.604 09:55:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.604 09:55:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.863 09:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.863 "name": "raid_bdev1", 00:10:27.863 "uuid": "bd2dfe2c-ce28-44e2-93e0-0f5bb774c658", 00:10:27.863 "strip_size_kb": 0, 00:10:27.863 "state": "online", 00:10:27.863 "raid_level": "raid1", 00:10:27.863 "superblock": true, 00:10:27.863 "num_base_bdevs": 3, 00:10:27.863 "num_base_bdevs_discovered": 2, 00:10:27.863 "num_base_bdevs_operational": 2, 00:10:27.863 "base_bdevs_list": [ 00:10:27.863 { 00:10:27.863 "name": null, 00:10:27.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.863 "is_configured": false, 00:10:27.863 "data_offset": 2048, 00:10:27.863 "data_size": 63488 00:10:27.863 }, 00:10:27.863 { 00:10:27.864 "name": "pt2", 00:10:27.864 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:27.864 "is_configured": true, 00:10:27.864 "data_offset": 2048, 00:10:27.864 "data_size": 63488 00:10:27.864 }, 00:10:27.864 { 00:10:27.864 "name": "pt3", 00:10:27.864 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:27.864 "is_configured": true, 00:10:27.864 "data_offset": 2048, 00:10:27.864 "data_size": 63488 00:10:27.864 } 00:10:27.864 ] 00:10:27.864 }' 00:10:27.864 09:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:27.864 09:55:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.124 09:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:28.124 09:55:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.124 
09:55:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.124 [2024-10-21 09:55:04.601687] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:28.124 [2024-10-21 09:55:04.601721] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:28.124 [2024-10-21 09:55:04.601806] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:28.124 [2024-10-21 09:55:04.601869] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:28.124 [2024-10-21 09:55:04.601879] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:10:28.124 09:55:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.124 09:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.124 09:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:10:28.124 09:55:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.124 09:55:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.124 09:55:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.124 09:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:10:28.124 09:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:10:28.124 09:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:10:28.124 09:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:10:28.124 09:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:10:28.124 09:55:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.124 09:55:04 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.124 09:55:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.124 09:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:28.124 09:55:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.124 09:55:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.124 [2024-10-21 09:55:04.677564] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:28.124 [2024-10-21 09:55:04.677630] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:28.124 [2024-10-21 09:55:04.677668] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:10:28.124 [2024-10-21 09:55:04.677677] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:28.124 [2024-10-21 09:55:04.679925] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:28.124 [2024-10-21 09:55:04.679960] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:28.124 [2024-10-21 09:55:04.680044] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:28.124 [2024-10-21 09:55:04.680103] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:28.124 [2024-10-21 09:55:04.680241] bdev_raid.c:3679:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:10:28.124 [2024-10-21 09:55:04.680250] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:28.124 [2024-10-21 09:55:04.680268] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state configuring 00:10:28.124 [2024-10-21 
09:55:04.680321] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:28.124 pt1 00:10:28.124 09:55:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.124 09:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:10:28.124 09:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:10:28.124 09:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:28.124 09:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:28.124 09:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:28.124 09:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:28.124 09:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:28.124 09:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:28.124 09:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:28.124 09:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:28.124 09:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:28.124 09:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.124 09:55:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.124 09:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:28.124 09:55:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.124 09:55:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.384 09:55:04 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:28.384 "name": "raid_bdev1", 00:10:28.384 "uuid": "bd2dfe2c-ce28-44e2-93e0-0f5bb774c658", 00:10:28.384 "strip_size_kb": 0, 00:10:28.384 "state": "configuring", 00:10:28.384 "raid_level": "raid1", 00:10:28.384 "superblock": true, 00:10:28.384 "num_base_bdevs": 3, 00:10:28.384 "num_base_bdevs_discovered": 1, 00:10:28.384 "num_base_bdevs_operational": 2, 00:10:28.384 "base_bdevs_list": [ 00:10:28.384 { 00:10:28.384 "name": null, 00:10:28.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:28.384 "is_configured": false, 00:10:28.384 "data_offset": 2048, 00:10:28.384 "data_size": 63488 00:10:28.384 }, 00:10:28.384 { 00:10:28.384 "name": "pt2", 00:10:28.384 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:28.384 "is_configured": true, 00:10:28.384 "data_offset": 2048, 00:10:28.384 "data_size": 63488 00:10:28.384 }, 00:10:28.384 { 00:10:28.384 "name": null, 00:10:28.384 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:28.384 "is_configured": false, 00:10:28.384 "data_offset": 2048, 00:10:28.384 "data_size": 63488 00:10:28.384 } 00:10:28.384 ] 00:10:28.384 }' 00:10:28.384 09:55:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:28.384 09:55:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.644 09:55:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:10:28.644 09:55:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.644 09:55:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.644 09:55:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:28.644 09:55:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.644 09:55:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 
-- # [[ false == \f\a\l\s\e ]] 00:10:28.644 09:55:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:28.644 09:55:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.644 09:55:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.644 [2024-10-21 09:55:05.160736] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:28.644 [2024-10-21 09:55:05.160797] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:28.644 [2024-10-21 09:55:05.160818] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:10:28.644 [2024-10-21 09:55:05.160827] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:28.644 [2024-10-21 09:55:05.161269] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:28.644 [2024-10-21 09:55:05.161291] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:28.644 [2024-10-21 09:55:05.161372] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:28.644 [2024-10-21 09:55:05.161417] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:28.644 [2024-10-21 09:55:05.161573] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:10:28.644 [2024-10-21 09:55:05.161638] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:28.644 [2024-10-21 09:55:05.161914] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:28.644 [2024-10-21 09:55:05.162114] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:10:28.644 [2024-10-21 09:55:05.162167] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with 
name raid_bdev1, raid_bdev 0x617000006d00 00:10:28.644 [2024-10-21 09:55:05.162339] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:28.644 pt3 00:10:28.644 09:55:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.644 09:55:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:28.644 09:55:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:28.644 09:55:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:28.644 09:55:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:28.644 09:55:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:28.644 09:55:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:28.644 09:55:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:28.644 09:55:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:28.644 09:55:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:28.644 09:55:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:28.644 09:55:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.644 09:55:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.644 09:55:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.644 09:55:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:28.644 09:55:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.644 09:55:05 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:28.644 "name": "raid_bdev1",
00:10:28.644 "uuid": "bd2dfe2c-ce28-44e2-93e0-0f5bb774c658",
00:10:28.644 "strip_size_kb": 0,
00:10:28.644 "state": "online",
00:10:28.644 "raid_level": "raid1",
00:10:28.644 "superblock": true,
00:10:28.644 "num_base_bdevs": 3,
00:10:28.644 "num_base_bdevs_discovered": 2,
00:10:28.644 "num_base_bdevs_operational": 2,
00:10:28.644 "base_bdevs_list": [
00:10:28.644 {
00:10:28.644 "name": null,
00:10:28.644 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:28.644 "is_configured": false,
00:10:28.644 "data_offset": 2048,
00:10:28.644 "data_size": 63488
00:10:28.644 },
00:10:28.644 {
00:10:28.644 "name": "pt2",
00:10:28.644 "uuid": "00000000-0000-0000-0000-000000000002",
00:10:28.644 "is_configured": true,
00:10:28.644 "data_offset": 2048,
00:10:28.644 "data_size": 63488
00:10:28.644 },
00:10:28.644 {
00:10:28.644 "name": "pt3",
00:10:28.644 "uuid": "00000000-0000-0000-0000-000000000003",
00:10:28.644 "is_configured": true,
00:10:28.644 "data_offset": 2048,
00:10:28.644 "data_size": 63488
00:10:28.644 }
00:10:28.644 ]
00:10:28.644 }'
00:10:28.644 09:55:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:28.644 09:55:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:29.216 09:55:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured'
00:10:29.216 09:55:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online
00:10:29.216 09:55:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:29.216 09:55:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:29.216 09:55:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:29.216 09:55:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]]
00:10:29.216 09:55:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:10:29.216 09:55:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:29.216 09:55:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:29.216 09:55:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid'
00:10:29.216 [2024-10-21 09:55:05.640185] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:10:29.216 09:55:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:29.216 09:55:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' bd2dfe2c-ce28-44e2-93e0-0f5bb774c658 '!=' bd2dfe2c-ce28-44e2-93e0-0f5bb774c658 ']'
00:10:29.216 09:55:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 68220
00:10:29.216 09:55:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 68220 ']'
00:10:29.216 09:55:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 68220
00:10:29.216 09:55:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname
00:10:29.216 09:55:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:10:29.216 09:55:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 68220
00:10:29.216 killing process with pid 68220
00:10:29.216 09:55:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:10:29.216 09:55:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:10:29.216 09:55:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 68220'
00:10:29.216 09:55:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 68220
00:10:29.216 [2024-10-21 09:55:05.725091] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:10:29.216 [2024-10-21 09:55:05.725188] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:10:29.216 [2024-10-21 09:55:05.725251] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:10:29.216 09:55:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 68220
00:10:29.216 [2024-10-21 09:55:05.725263] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline
00:10:29.480 [2024-10-21 09:55:06.039249] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:10:30.860 09:55:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0
00:10:30.860
00:10:30.860 real 0m7.767s
00:10:30.860 user 0m12.134s
00:10:30.860 sys 0m1.376s
00:10:30.860 09:55:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:10:30.860 09:55:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:30.860 ************************************
00:10:30.860 END TEST raid_superblock_test
00:10:30.860 ************************************
00:10:30.860 09:55:07 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read
00:10:30.860 09:55:07 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:10:30.860 09:55:07 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:10:30.860 09:55:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:10:30.860 ************************************
00:10:30.860 START TEST raid_read_error_test
00:10:30.860 ************************************
00:10:30.860 09:55:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 3 read
00:10:30.860 09:55:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1
00:10:30.860 09:55:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3
00:10:30.860 09:55:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read
00:10:30.860 09:55:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 ))
00:10:30.860 09:55:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:10:30.860 09:55:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1
00:10:30.860 09:55:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:10:30.860 09:55:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:10:30.860 09:55:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2
00:10:30.860 09:55:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:10:30.860 09:55:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:10:30.860 09:55:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3
00:10:30.860 09:55:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:10:30.860 09:55:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:10:30.860 09:55:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:10:30.860 09:55:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs
00:10:30.860 09:55:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1
00:10:30.860 09:55:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size
00:10:30.860 09:55:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg
00:10:30.860 09:55:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log
00:10:30.860 09:55:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s
00:10:30.860 09:55:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']'
00:10:30.860 09:55:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0
00:10:30.860 09:55:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest
00:10:30.860 09:55:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.3tRo8ZuolC
00:10:30.860 09:55:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=68665
00:10:30.860 09:55:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 68665
00:10:30.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:30.860 09:55:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 68665 ']'
00:10:30.860 09:55:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:30.860 09:55:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:10:30.860 09:55:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:30.860 09:55:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:10:30.860 09:55:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:30.860 09:55:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid
00:10:31.120 [2024-10-21 09:55:07.341742] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization...
00:10:31.120 [2024-10-21 09:55:07.341868] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68665 ]
00:10:31.120 [2024-10-21 09:55:07.503833] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:31.120 [2024-10-21 09:55:07.625294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:31.380 [2024-10-21 09:55:07.851477] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:10:31.380 [2024-10-21 09:55:07.851534] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:10:31.640 09:55:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:10:31.640 09:55:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0
00:10:31.640 09:55:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:10:31.640 09:55:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:10:31.640 09:55:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:31.640 09:55:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:31.640 BaseBdev1_malloc
00:10:31.640 09:55:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:31.640 09:55:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc
00:10:31.640 09:55:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:31.640 09:55:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:31.901 true
00:10:31.901 09:55:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:31.901 09:55:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
00:10:31.901 09:55:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:31.901 09:55:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:31.901 [2024-10-21 09:55:08.241288] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc
00:10:31.901 [2024-10-21 09:55:08.241343] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:31.901 [2024-10-21 09:55:08.241362] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:10:31.901 [2024-10-21 09:55:08.241376] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:31.901 [2024-10-21 09:55:08.243540] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:31.901 [2024-10-21 09:55:08.243589] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:10:31.901 BaseBdev1
00:10:31.901 09:55:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:31.901 09:55:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:10:31.901 09:55:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:10:31.901 09:55:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:31.901 09:55:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:31.901 BaseBdev2_malloc
00:10:31.901 09:55:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:31.901 09:55:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc
00:10:31.901 09:55:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:31.901 09:55:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:31.901 true
00:10:31.901 09:55:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:31.901 09:55:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
00:10:31.901 09:55:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:31.901 09:55:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:31.901 [2024-10-21 09:55:08.309825] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc
00:10:31.901 [2024-10-21 09:55:08.309877] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:31.901 [2024-10-21 09:55:08.309893] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180
00:10:31.901 [2024-10-21 09:55:08.309903] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:31.901 [2024-10-21 09:55:08.311992] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:31.901 [2024-10-21 09:55:08.312028] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:10:31.901 BaseBdev2
00:10:31.901 09:55:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:31.901 09:55:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:10:31.901 09:55:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:10:31.901 09:55:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:31.901 09:55:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:31.901 BaseBdev3_malloc
00:10:31.901 09:55:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:31.901 09:55:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc
00:10:31.901 09:55:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:31.901 09:55:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:31.901 true
00:10:31.901 09:55:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:31.901 09:55:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3
00:10:31.901 09:55:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:31.901 09:55:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:31.901 [2024-10-21 09:55:08.381257] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc
00:10:31.901 [2024-10-21 09:55:08.381313] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:31.901 [2024-10-21 09:55:08.381332] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080
00:10:31.901 [2024-10-21 09:55:08.381343] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:31.901 [2024-10-21 09:55:08.383639] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:31.901 [2024-10-21 09:55:08.383693] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:10:31.901 BaseBdev3
00:10:31.901 09:55:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:31.901 09:55:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s
00:10:31.901 09:55:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:31.901 09:55:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:31.901 [2024-10-21 09:55:08.393296] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:10:31.901 [2024-10-21 09:55:08.395116] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:10:31.901 [2024-10-21 09:55:08.395203] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:10:31.901 [2024-10-21 09:55:08.395390] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600
00:10:31.901 [2024-10-21 09:55:08.395412] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:10:31.901 [2024-10-21 09:55:08.395661] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0
00:10:31.901 [2024-10-21 09:55:08.395841] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600
00:10:31.901 [2024-10-21 09:55:08.395863] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600
00:10:31.901 [2024-10-21 09:55:08.395999] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:10:31.901 09:55:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:31.901 09:55:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:10:31.901 09:55:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:10:31.901 09:55:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:10:31.901 09:55:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:31.901 09:55:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:31.901 09:55:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:31.901 09:55:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:31.901 09:55:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:31.901 09:55:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:31.901 09:55:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:31.901 09:55:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:31.901 09:55:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:10:31.901 09:55:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:31.901 09:55:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:31.901 09:55:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:31.901 09:55:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:31.901 "name": "raid_bdev1",
00:10:31.901 "uuid": "edb458cb-0d95-429d-a668-09197e4ee0e5",
00:10:31.901 "strip_size_kb": 0,
00:10:31.901 "state": "online",
00:10:31.901 "raid_level": "raid1",
00:10:31.901 "superblock": true,
00:10:31.901 "num_base_bdevs": 3,
00:10:31.901 "num_base_bdevs_discovered": 3,
00:10:31.901 "num_base_bdevs_operational": 3,
00:10:31.901 "base_bdevs_list": [
00:10:31.901 {
00:10:31.901 "name": "BaseBdev1",
00:10:31.901 "uuid": "29d5421a-ae7d-58f9-808b-05dca7c70729",
00:10:31.901 "is_configured": true,
00:10:31.901 "data_offset": 2048,
00:10:31.901 "data_size": 63488
00:10:31.901 },
00:10:31.901 {
00:10:31.901 "name": "BaseBdev2",
00:10:31.901 "uuid": "015cdd30-370d-58d1-a091-85563dc7e5b0",
00:10:31.901 "is_configured": true,
00:10:31.901 "data_offset": 2048,
00:10:31.901 "data_size": 63488
00:10:31.901 },
00:10:31.901 {
00:10:31.901 "name": "BaseBdev3",
00:10:31.901 "uuid": "69e2387a-2319-58a8-b8c5-91b9eb15bb0e",
00:10:31.901 "is_configured": true,
00:10:31.901 "data_offset": 2048,
00:10:31.901 "data_size": 63488
00:10:31.901 }
00:10:31.901 ]
00:10:31.901 }'
00:10:31.901 09:55:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:31.901 09:55:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:32.471 09:55:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1
00:10:32.471 09:55:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:10:32.471 [2024-10-21 09:55:08.985672] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:10:33.410 09:55:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure
00:10:33.410 09:55:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:33.410 09:55:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:33.410 09:55:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:33.410 09:55:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs
00:10:33.410 09:55:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]]
00:10:33.410 09:55:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]]
00:10:33.410 09:55:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3
00:10:33.410 09:55:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:10:33.410 09:55:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:10:33.410 09:55:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:10:33.410 09:55:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:33.410 09:55:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:33.410 09:55:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:33.410 09:55:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:33.410 09:55:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:33.410 09:55:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:33.410 09:55:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:33.410 09:55:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:33.410 09:55:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:10:33.410 09:55:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:33.410 09:55:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:33.410 09:55:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:33.410 09:55:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:33.410 "name": "raid_bdev1",
00:10:33.410 "uuid": "edb458cb-0d95-429d-a668-09197e4ee0e5",
00:10:33.410 "strip_size_kb": 0,
00:10:33.410 "state": "online",
00:10:33.410 "raid_level": "raid1",
00:10:33.410 "superblock": true,
00:10:33.410 "num_base_bdevs": 3,
00:10:33.410 "num_base_bdevs_discovered": 3,
00:10:33.410 "num_base_bdevs_operational": 3,
00:10:33.410 "base_bdevs_list": [
00:10:33.410 {
00:10:33.410 "name": "BaseBdev1",
00:10:33.410 "uuid": "29d5421a-ae7d-58f9-808b-05dca7c70729",
00:10:33.410 "is_configured": true,
00:10:33.410 "data_offset": 2048,
00:10:33.410 "data_size": 63488
00:10:33.410 },
00:10:33.410 {
00:10:33.410 "name": "BaseBdev2",
00:10:33.410 "uuid": "015cdd30-370d-58d1-a091-85563dc7e5b0",
00:10:33.410 "is_configured": true,
00:10:33.410 "data_offset": 2048,
00:10:33.410 "data_size": 63488
00:10:33.410 },
00:10:33.410 {
00:10:33.410 "name": "BaseBdev3",
00:10:33.410 "uuid": "69e2387a-2319-58a8-b8c5-91b9eb15bb0e",
00:10:33.410 "is_configured": true,
00:10:33.410 "data_offset": 2048,
00:10:33.410 "data_size": 63488
00:10:33.410 }
00:10:33.410 ]
00:10:33.410 }'
00:10:33.410 09:55:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:33.410 09:55:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:33.980 09:55:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:10:33.980 09:55:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:33.980 09:55:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:33.980 [2024-10-21 09:55:10.345553] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:10:33.980 [2024-10-21 09:55:10.345607] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:10:33.980 [2024-10-21 09:55:10.348687] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:10:33.980 [2024-10-21 09:55:10.348742] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:10:33.980 [2024-10-21 09:55:10.348854] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:10:33.980 [2024-10-21 09:55:10.348875] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline
00:10:33.980 {
00:10:33.980 "results": [
00:10:33.980 {
00:10:33.980 "job": "raid_bdev1",
00:10:33.980 "core_mask": "0x1",
00:10:33.980 "workload": "randrw",
00:10:33.980 "percentage": 50,
00:10:33.980 "status": "finished",
00:10:33.980 "queue_depth": 1,
00:10:33.980 "io_size": 131072,
00:10:33.980 "runtime": 1.360566,
00:10:33.980 "iops": 13184.218920655081,
00:10:33.980 "mibps": 1648.0273650818851,
00:10:33.980 "io_failed": 0,
00:10:33.980 "io_timeout": 0,
00:10:33.980 "avg_latency_us": 73.18535255594111,
00:10:33.980 "min_latency_us": 23.811353711790392,
00:10:33.980 "max_latency_us": 1480.9991266375546
00:10:33.980 }
00:10:33.980 ],
00:10:33.980 "core_count": 1
00:10:33.980 }
00:10:33.980 09:55:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:33.980 09:55:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 68665
00:10:33.980 09:55:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 68665 ']'
00:10:33.980 09:55:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 68665
00:10:33.980 09:55:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname
00:10:33.980 09:55:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:10:33.980 09:55:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 68665
00:10:33.980 killing process with pid 68665
00:10:33.980 09:55:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:10:33.980 09:55:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:10:33.980 09:55:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 68665'
00:10:33.980 09:55:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 68665
00:10:33.980 [2024-10-21 09:55:10.394339] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:10:33.980 09:55:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 68665
00:10:34.240 [2024-10-21 09:55:10.630839] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:10:35.619 09:55:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.3tRo8ZuolC
00:10:35.619 09:55:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1
00:10:35.619 09:55:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}'
00:10:35.619 09:55:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00
00:10:35.619 09:55:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1
00:10:35.619 09:55:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:10:35.619 09:55:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0
00:10:35.619 09:55:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]]
00:10:35.619
00:10:35.619 real 0m4.586s
00:10:35.619 user 0m5.484s
00:10:35.619 sys 0m0.555s
00:10:35.619 09:55:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:10:35.619 09:55:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:35.619 ************************************
00:10:35.619 END TEST raid_read_error_test
00:10:35.619 ************************************
00:10:35.619 09:55:11 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write
00:10:35.619 09:55:11 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:10:35.619 09:55:11 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:10:35.619 09:55:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:10:35.619 ************************************
00:10:35.619 START TEST raid_write_error_test
00:10:35.619 ************************************
00:10:35.619 09:55:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 3 write
00:10:35.619 09:55:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1
00:10:35.619 09:55:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3
00:10:35.619 09:55:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write
00:10:35.619 09:55:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 ))
00:10:35.619 09:55:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:10:35.619 09:55:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1
00:10:35.619 09:55:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:10:35.619 09:55:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:10:35.619 09:55:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2
00:10:35.619 09:55:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:10:35.619 09:55:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:10:35.619 09:55:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3
00:10:35.619 09:55:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:10:35.619 09:55:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:10:35.619 09:55:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:10:35.619 09:55:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs
00:10:35.619 09:55:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1
00:10:35.619 09:55:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size
00:10:35.619 09:55:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg
00:10:35.619 09:55:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log
00:10:35.619 09:55:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s
00:10:35.619 09:55:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']'
00:10:35.619 09:55:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0
00:10:35.619 09:55:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest
00:10:35.619 09:55:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.jpLxL98ms3
00:10:35.619 09:55:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=68811
00:10:35.619 09:55:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid
00:10:35.619 09:55:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 68811
00:10:35.619 09:55:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 68811 ']'
00:10:35.619 09:55:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:35.619 09:55:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:10:35.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:35.619 09:55:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:35.619 09:55:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:10:35.619 09:55:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:35.619 [2024-10-21 09:55:12.008798] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization...
00:10:35.619 [2024-10-21 09:55:12.008967] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68811 ]
00:10:35.619 [2024-10-21 09:55:12.186076] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:35.879 [2024-10-21 09:55:12.310921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:36.138 [2024-10-21 09:55:12.529556] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:10:36.139 [2024-10-21 09:55:12.529635] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:10:36.398 09:55:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:10:36.398 09:55:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0
00:10:36.398 09:55:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:10:36.398 09:55:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:10:36.398 09:55:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:36.398 09:55:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:36.398 BaseBdev1_malloc
00:10:36.398 09:55:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:36.398 09:55:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc
00:10:36.398 09:55:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:36.398 09:55:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:36.398 true
00:10:36.398 09:55:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:36.398 09:55:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
00:10:36.398 09:55:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:36.398 09:55:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:36.398 [2024-10-21 09:55:12.911514] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc
00:10:36.398 [2024-10-21 09:55:12.911579] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:36.398 [2024-10-21 09:55:12.911599] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:10:36.398 [2024-10-21 09:55:12.911612] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:36.398 [2024-10-21 09:55:12.913749] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:36.399 [2024-10-21 09:55:12.913785] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:10:36.399 BaseBdev1
00:10:36.399 09:55:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:36.399 09:55:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:10:36.399 09:55:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:10:36.399 09:55:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:36.399 09:55:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:36.399 BaseBdev2_malloc
00:10:36.399 09:55:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:36.399 09:55:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc
00:10:36.399 09:55:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:36.399 09:55:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:36.399 true
00:10:36.399 09:55:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:36.399 09:55:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
00:10:36.399 09:55:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:36.399 09:55:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:36.399 [2024-10-21 09:55:12.981815] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc
00:10:36.399 [2024-10-21 09:55:12.981864] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:36.399 [2024-10-21 09:55:12.981880] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180
00:10:36.399 [2024-10-21 09:55:12.981890] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:36.399 [2024-10-21 09:55:12.983941] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:36.399 [2024-10-21 09:55:12.983977] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:10:36.399 BaseBdev2
00:10:36.399 09:55:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:36.399 09:55:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:10:36.399 09:55:12
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:36.399 09:55:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.399 09:55:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.659 BaseBdev3_malloc 00:10:36.659 09:55:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.659 09:55:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:36.659 09:55:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.659 09:55:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.659 true 00:10:36.659 09:55:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.659 09:55:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:36.659 09:55:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.659 09:55:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.659 [2024-10-21 09:55:13.064802] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:36.659 [2024-10-21 09:55:13.064860] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:36.659 [2024-10-21 09:55:13.064881] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:10:36.659 [2024-10-21 09:55:13.064892] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:36.659 [2024-10-21 09:55:13.067113] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:36.659 [2024-10-21 09:55:13.067154] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:10:36.659 BaseBdev3 00:10:36.659 09:55:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.659 09:55:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:36.659 09:55:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.659 09:55:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.659 [2024-10-21 09:55:13.076843] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:36.659 [2024-10-21 09:55:13.078694] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:36.659 [2024-10-21 09:55:13.078777] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:36.659 [2024-10-21 09:55:13.078979] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:10:36.659 [2024-10-21 09:55:13.079002] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:36.659 [2024-10-21 09:55:13.079280] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:10:36.659 [2024-10-21 09:55:13.079471] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:10:36.659 [2024-10-21 09:55:13.079495] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:10:36.659 [2024-10-21 09:55:13.079692] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:36.659 09:55:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.659 09:55:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:36.659 09:55:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:10:36.659 09:55:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:36.659 09:55:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:36.659 09:55:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:36.659 09:55:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:36.659 09:55:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.659 09:55:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.659 09:55:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.659 09:55:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.659 09:55:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.659 09:55:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:36.659 09:55:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.659 09:55:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.659 09:55:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.659 09:55:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.659 "name": "raid_bdev1", 00:10:36.659 "uuid": "e07931f1-cb58-413d-bea2-74733e8c9c08", 00:10:36.659 "strip_size_kb": 0, 00:10:36.659 "state": "online", 00:10:36.659 "raid_level": "raid1", 00:10:36.659 "superblock": true, 00:10:36.659 "num_base_bdevs": 3, 00:10:36.659 "num_base_bdevs_discovered": 3, 00:10:36.659 "num_base_bdevs_operational": 3, 00:10:36.659 "base_bdevs_list": [ 00:10:36.659 { 00:10:36.659 "name": "BaseBdev1", 00:10:36.659 
"uuid": "a44cb3c4-4374-5bf5-9482-e5281e3838c0", 00:10:36.659 "is_configured": true, 00:10:36.659 "data_offset": 2048, 00:10:36.659 "data_size": 63488 00:10:36.659 }, 00:10:36.659 { 00:10:36.659 "name": "BaseBdev2", 00:10:36.659 "uuid": "96205bd8-81ba-5480-a261-25b80f4ef682", 00:10:36.659 "is_configured": true, 00:10:36.659 "data_offset": 2048, 00:10:36.659 "data_size": 63488 00:10:36.659 }, 00:10:36.659 { 00:10:36.659 "name": "BaseBdev3", 00:10:36.659 "uuid": "0bb0f9eb-792a-5321-b8f0-ad72564425af", 00:10:36.659 "is_configured": true, 00:10:36.659 "data_offset": 2048, 00:10:36.659 "data_size": 63488 00:10:36.659 } 00:10:36.659 ] 00:10:36.659 }' 00:10:36.659 09:55:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.659 09:55:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.238 09:55:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:37.238 09:55:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:37.238 [2024-10-21 09:55:13.621579] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:38.191 09:55:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:38.191 09:55:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.191 09:55:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.191 [2024-10-21 09:55:14.535757] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:10:38.191 [2024-10-21 09:55:14.535810] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:38.191 [2024-10-21 09:55:14.536029] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005d40 
00:10:38.191 09:55:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.191 09:55:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:38.191 09:55:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:38.191 09:55:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:10:38.191 09:55:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:10:38.191 09:55:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:38.191 09:55:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:38.191 09:55:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:38.191 09:55:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:38.191 09:55:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:38.191 09:55:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:38.191 09:55:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.191 09:55:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.191 09:55:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.191 09:55:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.191 09:55:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.191 09:55:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:38.191 09:55:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:38.191 09:55:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.191 09:55:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.191 09:55:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.191 "name": "raid_bdev1", 00:10:38.191 "uuid": "e07931f1-cb58-413d-bea2-74733e8c9c08", 00:10:38.191 "strip_size_kb": 0, 00:10:38.191 "state": "online", 00:10:38.191 "raid_level": "raid1", 00:10:38.191 "superblock": true, 00:10:38.191 "num_base_bdevs": 3, 00:10:38.191 "num_base_bdevs_discovered": 2, 00:10:38.191 "num_base_bdevs_operational": 2, 00:10:38.191 "base_bdevs_list": [ 00:10:38.191 { 00:10:38.191 "name": null, 00:10:38.191 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.191 "is_configured": false, 00:10:38.191 "data_offset": 0, 00:10:38.191 "data_size": 63488 00:10:38.191 }, 00:10:38.191 { 00:10:38.191 "name": "BaseBdev2", 00:10:38.191 "uuid": "96205bd8-81ba-5480-a261-25b80f4ef682", 00:10:38.191 "is_configured": true, 00:10:38.191 "data_offset": 2048, 00:10:38.191 "data_size": 63488 00:10:38.191 }, 00:10:38.191 { 00:10:38.191 "name": "BaseBdev3", 00:10:38.191 "uuid": "0bb0f9eb-792a-5321-b8f0-ad72564425af", 00:10:38.191 "is_configured": true, 00:10:38.191 "data_offset": 2048, 00:10:38.191 "data_size": 63488 00:10:38.191 } 00:10:38.191 ] 00:10:38.191 }' 00:10:38.192 09:55:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.192 09:55:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.451 09:55:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:38.451 09:55:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.451 09:55:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.451 [2024-10-21 09:55:14.990004] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:38.451 [2024-10-21 09:55:14.990042] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:38.451 [2024-10-21 09:55:14.992804] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:38.451 [2024-10-21 09:55:14.992854] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:38.451 [2024-10-21 09:55:14.992928] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:38.451 [2024-10-21 09:55:14.992941] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:10:38.451 { 00:10:38.451 "results": [ 00:10:38.451 { 00:10:38.451 "job": "raid_bdev1", 00:10:38.451 "core_mask": "0x1", 00:10:38.451 "workload": "randrw", 00:10:38.451 "percentage": 50, 00:10:38.451 "status": "finished", 00:10:38.451 "queue_depth": 1, 00:10:38.451 "io_size": 131072, 00:10:38.451 "runtime": 1.36914, 00:10:38.451 "iops": 14820.252129073726, 00:10:38.451 "mibps": 1852.5315161342157, 00:10:38.451 "io_failed": 0, 00:10:38.451 "io_timeout": 0, 00:10:38.451 "avg_latency_us": 64.86601330553115, 00:10:38.451 "min_latency_us": 22.805240174672488, 00:10:38.451 "max_latency_us": 1402.2986899563318 00:10:38.451 } 00:10:38.451 ], 00:10:38.451 "core_count": 1 00:10:38.451 } 00:10:38.451 09:55:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.451 09:55:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 68811 00:10:38.451 09:55:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 68811 ']' 00:10:38.451 09:55:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 68811 00:10:38.451 09:55:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:10:38.451 09:55:15 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:38.451 09:55:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 68811 00:10:38.451 09:55:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:38.451 09:55:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:38.451 killing process with pid 68811 00:10:38.451 09:55:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 68811' 00:10:38.451 09:55:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 68811 00:10:38.451 [2024-10-21 09:55:15.040063] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:38.451 09:55:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 68811 00:10:38.711 [2024-10-21 09:55:15.268220] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:40.093 09:55:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.jpLxL98ms3 00:10:40.093 09:55:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:40.093 09:55:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:40.093 09:55:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:40.093 09:55:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:40.093 09:55:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:40.093 09:55:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:40.093 09:55:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:40.093 00:10:40.093 real 0m4.559s 00:10:40.093 user 0m5.425s 00:10:40.093 sys 0m0.577s 00:10:40.093 09:55:16 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:40.093 09:55:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.093 ************************************ 00:10:40.093 END TEST raid_write_error_test 00:10:40.093 ************************************ 00:10:40.093 09:55:16 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:10:40.093 09:55:16 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:40.093 09:55:16 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:10:40.093 09:55:16 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:40.093 09:55:16 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:40.093 09:55:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:40.093 ************************************ 00:10:40.093 START TEST raid_state_function_test 00:10:40.093 ************************************ 00:10:40.093 09:55:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 4 false 00:10:40.093 09:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:40.093 09:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:40.093 09:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:40.093 09:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:40.093 09:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:40.093 09:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:40.093 09:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:40.093 09:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i++ )) 00:10:40.093 09:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:40.093 09:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:40.093 09:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:40.093 09:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:40.093 09:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:40.094 09:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:40.094 09:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:40.094 09:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:40.094 09:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:40.094 09:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:40.094 09:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:40.094 09:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:40.094 09:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:40.094 09:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:40.094 09:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:40.094 09:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:40.094 09:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:40.094 09:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:40.094 
09:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:40.094 09:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:40.094 09:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:40.094 09:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=68949 00:10:40.094 09:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:40.094 09:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 68949' 00:10:40.094 Process raid pid: 68949 00:10:40.094 09:55:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 68949 00:10:40.094 09:55:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 68949 ']' 00:10:40.094 09:55:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:40.094 09:55:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:40.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:40.094 09:55:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:40.094 09:55:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:40.094 09:55:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.094 [2024-10-21 09:55:16.627392] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
00:10:40.094 [2024-10-21 09:55:16.627523] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:40.353 [2024-10-21 09:55:16.778430] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:40.353 [2024-10-21 09:55:16.898287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:40.614 [2024-10-21 09:55:17.119294] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:40.614 [2024-10-21 09:55:17.119335] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:41.183 09:55:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:41.183 09:55:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:10:41.183 09:55:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:41.183 09:55:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.183 09:55:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.183 [2024-10-21 09:55:17.477653] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:41.183 [2024-10-21 09:55:17.477706] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:41.183 [2024-10-21 09:55:17.477716] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:41.183 [2024-10-21 09:55:17.477727] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:41.183 [2024-10-21 09:55:17.477734] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:10:41.183 [2024-10-21 09:55:17.477742] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:41.183 [2024-10-21 09:55:17.477748] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:41.183 [2024-10-21 09:55:17.477757] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:41.183 09:55:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.183 09:55:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:41.183 09:55:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:41.183 09:55:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:41.183 09:55:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:41.183 09:55:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:41.183 09:55:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:41.183 09:55:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:41.183 09:55:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:41.183 09:55:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:41.183 09:55:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:41.183 09:55:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.183 09:55:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.183 09:55:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:10:41.183 09:55:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:41.183 09:55:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.184 09:55:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:41.184 "name": "Existed_Raid", 00:10:41.184 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.184 "strip_size_kb": 64, 00:10:41.184 "state": "configuring", 00:10:41.184 "raid_level": "raid0", 00:10:41.184 "superblock": false, 00:10:41.184 "num_base_bdevs": 4, 00:10:41.184 "num_base_bdevs_discovered": 0, 00:10:41.184 "num_base_bdevs_operational": 4, 00:10:41.184 "base_bdevs_list": [ 00:10:41.184 { 00:10:41.184 "name": "BaseBdev1", 00:10:41.184 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.184 "is_configured": false, 00:10:41.184 "data_offset": 0, 00:10:41.184 "data_size": 0 00:10:41.184 }, 00:10:41.184 { 00:10:41.184 "name": "BaseBdev2", 00:10:41.184 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.184 "is_configured": false, 00:10:41.184 "data_offset": 0, 00:10:41.184 "data_size": 0 00:10:41.184 }, 00:10:41.184 { 00:10:41.184 "name": "BaseBdev3", 00:10:41.184 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.184 "is_configured": false, 00:10:41.184 "data_offset": 0, 00:10:41.184 "data_size": 0 00:10:41.184 }, 00:10:41.184 { 00:10:41.184 "name": "BaseBdev4", 00:10:41.184 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.184 "is_configured": false, 00:10:41.184 "data_offset": 0, 00:10:41.184 "data_size": 0 00:10:41.184 } 00:10:41.184 ] 00:10:41.184 }' 00:10:41.184 09:55:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:41.184 09:55:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.444 09:55:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:10:41.444 09:55:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.444 09:55:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.444 [2024-10-21 09:55:17.992702] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:41.444 [2024-10-21 09:55:17.992745] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000005b80 name Existed_Raid, state configuring 00:10:41.444 09:55:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.444 09:55:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:41.444 09:55:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.444 09:55:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.444 [2024-10-21 09:55:18.000696] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:41.444 [2024-10-21 09:55:18.000731] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:41.444 [2024-10-21 09:55:18.000740] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:41.444 [2024-10-21 09:55:18.000751] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:41.444 [2024-10-21 09:55:18.000758] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:41.444 [2024-10-21 09:55:18.000767] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:41.444 [2024-10-21 09:55:18.000774] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:41.444 [2024-10-21 09:55:18.000783] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:41.444 09:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.444 09:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:41.444 09:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.444 09:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.702 [2024-10-21 09:55:18.052784] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:41.702 BaseBdev1 00:10:41.702 09:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.702 09:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:41.702 09:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:41.702 09:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:41.702 09:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:41.702 09:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:41.702 09:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:41.702 09:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:41.702 09:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.702 09:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.702 09:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.702 09:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:41.702 09:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.702 09:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.702 [ 00:10:41.702 { 00:10:41.702 "name": "BaseBdev1", 00:10:41.702 "aliases": [ 00:10:41.702 "960f3bb5-ae35-4b6c-aa06-d67e88803f20" 00:10:41.702 ], 00:10:41.702 "product_name": "Malloc disk", 00:10:41.702 "block_size": 512, 00:10:41.702 "num_blocks": 65536, 00:10:41.702 "uuid": "960f3bb5-ae35-4b6c-aa06-d67e88803f20", 00:10:41.702 "assigned_rate_limits": { 00:10:41.702 "rw_ios_per_sec": 0, 00:10:41.702 "rw_mbytes_per_sec": 0, 00:10:41.702 "r_mbytes_per_sec": 0, 00:10:41.702 "w_mbytes_per_sec": 0 00:10:41.702 }, 00:10:41.702 "claimed": true, 00:10:41.702 "claim_type": "exclusive_write", 00:10:41.702 "zoned": false, 00:10:41.702 "supported_io_types": { 00:10:41.702 "read": true, 00:10:41.702 "write": true, 00:10:41.702 "unmap": true, 00:10:41.702 "flush": true, 00:10:41.702 "reset": true, 00:10:41.702 "nvme_admin": false, 00:10:41.702 "nvme_io": false, 00:10:41.702 "nvme_io_md": false, 00:10:41.702 "write_zeroes": true, 00:10:41.702 "zcopy": true, 00:10:41.702 "get_zone_info": false, 00:10:41.702 "zone_management": false, 00:10:41.702 "zone_append": false, 00:10:41.702 "compare": false, 00:10:41.702 "compare_and_write": false, 00:10:41.702 "abort": true, 00:10:41.702 "seek_hole": false, 00:10:41.702 "seek_data": false, 00:10:41.702 "copy": true, 00:10:41.702 "nvme_iov_md": false 00:10:41.702 }, 00:10:41.702 "memory_domains": [ 00:10:41.702 { 00:10:41.702 "dma_device_id": "system", 00:10:41.702 "dma_device_type": 1 00:10:41.702 }, 00:10:41.702 { 00:10:41.702 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.702 "dma_device_type": 2 00:10:41.702 } 00:10:41.702 ], 00:10:41.702 "driver_specific": {} 00:10:41.702 } 00:10:41.702 ] 00:10:41.702 09:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:10:41.702 09:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:41.702 09:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:41.702 09:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:41.702 09:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:41.702 09:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:41.702 09:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:41.702 09:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:41.702 09:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:41.702 09:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:41.702 09:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:41.702 09:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:41.702 09:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.702 09:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:41.702 09:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.702 09:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.702 09:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.702 09:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:41.702 "name": "Existed_Raid", 
00:10:41.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.702 "strip_size_kb": 64, 00:10:41.702 "state": "configuring", 00:10:41.702 "raid_level": "raid0", 00:10:41.702 "superblock": false, 00:10:41.702 "num_base_bdevs": 4, 00:10:41.702 "num_base_bdevs_discovered": 1, 00:10:41.702 "num_base_bdevs_operational": 4, 00:10:41.702 "base_bdevs_list": [ 00:10:41.702 { 00:10:41.702 "name": "BaseBdev1", 00:10:41.702 "uuid": "960f3bb5-ae35-4b6c-aa06-d67e88803f20", 00:10:41.702 "is_configured": true, 00:10:41.702 "data_offset": 0, 00:10:41.702 "data_size": 65536 00:10:41.702 }, 00:10:41.702 { 00:10:41.702 "name": "BaseBdev2", 00:10:41.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.702 "is_configured": false, 00:10:41.702 "data_offset": 0, 00:10:41.702 "data_size": 0 00:10:41.702 }, 00:10:41.702 { 00:10:41.702 "name": "BaseBdev3", 00:10:41.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.702 "is_configured": false, 00:10:41.702 "data_offset": 0, 00:10:41.702 "data_size": 0 00:10:41.702 }, 00:10:41.702 { 00:10:41.702 "name": "BaseBdev4", 00:10:41.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.702 "is_configured": false, 00:10:41.702 "data_offset": 0, 00:10:41.702 "data_size": 0 00:10:41.702 } 00:10:41.702 ] 00:10:41.702 }' 00:10:41.702 09:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:41.702 09:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.960 09:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:41.960 09:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.960 09:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.960 [2024-10-21 09:55:18.548048] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:41.960 [2024-10-21 09:55:18.548115] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000005f00 name Existed_Raid, state configuring 00:10:41.960 09:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.960 09:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:41.960 09:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.960 09:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.218 [2024-10-21 09:55:18.560084] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:42.218 [2024-10-21 09:55:18.562280] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:42.218 [2024-10-21 09:55:18.562331] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:42.218 [2024-10-21 09:55:18.562343] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:42.218 [2024-10-21 09:55:18.562356] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:42.218 [2024-10-21 09:55:18.562364] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:42.218 [2024-10-21 09:55:18.562375] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:42.218 09:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.218 09:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:42.218 09:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:42.218 09:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:10:42.218 09:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:42.218 09:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:42.218 09:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:42.218 09:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:42.218 09:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:42.218 09:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:42.218 09:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.218 09:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:42.218 09:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.218 09:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:42.218 09:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.218 09:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.218 09:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.218 09:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.218 09:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:42.218 "name": "Existed_Raid", 00:10:42.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.218 "strip_size_kb": 64, 00:10:42.218 "state": "configuring", 00:10:42.218 "raid_level": "raid0", 00:10:42.218 "superblock": false, 00:10:42.218 "num_base_bdevs": 4, 00:10:42.218 
"num_base_bdevs_discovered": 1, 00:10:42.218 "num_base_bdevs_operational": 4, 00:10:42.218 "base_bdevs_list": [ 00:10:42.218 { 00:10:42.218 "name": "BaseBdev1", 00:10:42.218 "uuid": "960f3bb5-ae35-4b6c-aa06-d67e88803f20", 00:10:42.218 "is_configured": true, 00:10:42.218 "data_offset": 0, 00:10:42.218 "data_size": 65536 00:10:42.218 }, 00:10:42.218 { 00:10:42.218 "name": "BaseBdev2", 00:10:42.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.218 "is_configured": false, 00:10:42.218 "data_offset": 0, 00:10:42.218 "data_size": 0 00:10:42.218 }, 00:10:42.218 { 00:10:42.218 "name": "BaseBdev3", 00:10:42.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.218 "is_configured": false, 00:10:42.218 "data_offset": 0, 00:10:42.218 "data_size": 0 00:10:42.218 }, 00:10:42.218 { 00:10:42.218 "name": "BaseBdev4", 00:10:42.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.218 "is_configured": false, 00:10:42.218 "data_offset": 0, 00:10:42.218 "data_size": 0 00:10:42.218 } 00:10:42.218 ] 00:10:42.218 }' 00:10:42.218 09:55:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:42.218 09:55:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.475 09:55:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:42.475 09:55:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.475 09:55:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.732 [2024-10-21 09:55:19.084812] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:42.733 BaseBdev2 00:10:42.733 09:55:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.733 09:55:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:42.733 09:55:19 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:42.733 09:55:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:42.733 09:55:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:42.733 09:55:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:42.733 09:55:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:42.733 09:55:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:42.733 09:55:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.733 09:55:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.733 09:55:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.733 09:55:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:42.733 09:55:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.733 09:55:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.733 [ 00:10:42.733 { 00:10:42.733 "name": "BaseBdev2", 00:10:42.733 "aliases": [ 00:10:42.733 "f8162bd5-5a8f-4d52-bedf-1b578455139d" 00:10:42.733 ], 00:10:42.733 "product_name": "Malloc disk", 00:10:42.733 "block_size": 512, 00:10:42.733 "num_blocks": 65536, 00:10:42.733 "uuid": "f8162bd5-5a8f-4d52-bedf-1b578455139d", 00:10:42.733 "assigned_rate_limits": { 00:10:42.733 "rw_ios_per_sec": 0, 00:10:42.733 "rw_mbytes_per_sec": 0, 00:10:42.733 "r_mbytes_per_sec": 0, 00:10:42.733 "w_mbytes_per_sec": 0 00:10:42.733 }, 00:10:42.733 "claimed": true, 00:10:42.733 "claim_type": "exclusive_write", 00:10:42.733 "zoned": false, 00:10:42.733 "supported_io_types": { 
00:10:42.733 "read": true, 00:10:42.733 "write": true, 00:10:42.733 "unmap": true, 00:10:42.733 "flush": true, 00:10:42.733 "reset": true, 00:10:42.733 "nvme_admin": false, 00:10:42.733 "nvme_io": false, 00:10:42.733 "nvme_io_md": false, 00:10:42.733 "write_zeroes": true, 00:10:42.733 "zcopy": true, 00:10:42.733 "get_zone_info": false, 00:10:42.733 "zone_management": false, 00:10:42.733 "zone_append": false, 00:10:42.733 "compare": false, 00:10:42.733 "compare_and_write": false, 00:10:42.733 "abort": true, 00:10:42.733 "seek_hole": false, 00:10:42.733 "seek_data": false, 00:10:42.733 "copy": true, 00:10:42.733 "nvme_iov_md": false 00:10:42.733 }, 00:10:42.733 "memory_domains": [ 00:10:42.733 { 00:10:42.733 "dma_device_id": "system", 00:10:42.733 "dma_device_type": 1 00:10:42.733 }, 00:10:42.733 { 00:10:42.733 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.733 "dma_device_type": 2 00:10:42.733 } 00:10:42.733 ], 00:10:42.733 "driver_specific": {} 00:10:42.733 } 00:10:42.733 ] 00:10:42.733 09:55:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.733 09:55:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:42.733 09:55:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:42.733 09:55:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:42.733 09:55:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:42.733 09:55:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:42.733 09:55:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:42.733 09:55:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:42.733 09:55:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:10:42.733 09:55:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:42.733 09:55:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:42.733 09:55:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.733 09:55:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:42.733 09:55:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.733 09:55:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.733 09:55:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.733 09:55:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:42.733 09:55:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.733 09:55:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.733 09:55:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:42.733 "name": "Existed_Raid", 00:10:42.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.733 "strip_size_kb": 64, 00:10:42.733 "state": "configuring", 00:10:42.733 "raid_level": "raid0", 00:10:42.733 "superblock": false, 00:10:42.733 "num_base_bdevs": 4, 00:10:42.733 "num_base_bdevs_discovered": 2, 00:10:42.733 "num_base_bdevs_operational": 4, 00:10:42.733 "base_bdevs_list": [ 00:10:42.733 { 00:10:42.733 "name": "BaseBdev1", 00:10:42.733 "uuid": "960f3bb5-ae35-4b6c-aa06-d67e88803f20", 00:10:42.733 "is_configured": true, 00:10:42.733 "data_offset": 0, 00:10:42.733 "data_size": 65536 00:10:42.733 }, 00:10:42.733 { 00:10:42.733 "name": "BaseBdev2", 00:10:42.733 "uuid": "f8162bd5-5a8f-4d52-bedf-1b578455139d", 00:10:42.733 
"is_configured": true, 00:10:42.733 "data_offset": 0, 00:10:42.733 "data_size": 65536 00:10:42.733 }, 00:10:42.733 { 00:10:42.733 "name": "BaseBdev3", 00:10:42.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.733 "is_configured": false, 00:10:42.733 "data_offset": 0, 00:10:42.733 "data_size": 0 00:10:42.733 }, 00:10:42.733 { 00:10:42.733 "name": "BaseBdev4", 00:10:42.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.733 "is_configured": false, 00:10:42.733 "data_offset": 0, 00:10:42.733 "data_size": 0 00:10:42.733 } 00:10:42.733 ] 00:10:42.733 }' 00:10:42.733 09:55:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:42.733 09:55:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.299 09:55:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:43.299 09:55:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.299 09:55:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.299 [2024-10-21 09:55:19.646784] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:43.299 BaseBdev3 00:10:43.299 09:55:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.299 09:55:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:43.299 09:55:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:43.299 09:55:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:43.299 09:55:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:43.299 09:55:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:43.299 09:55:19 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:43.299 09:55:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:43.299 09:55:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.299 09:55:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.299 09:55:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.299 09:55:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:43.299 09:55:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.299 09:55:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.299 [ 00:10:43.299 { 00:10:43.299 "name": "BaseBdev3", 00:10:43.299 "aliases": [ 00:10:43.299 "84df0d28-c456-49a0-9e23-0c1ee0b63d33" 00:10:43.299 ], 00:10:43.299 "product_name": "Malloc disk", 00:10:43.299 "block_size": 512, 00:10:43.299 "num_blocks": 65536, 00:10:43.299 "uuid": "84df0d28-c456-49a0-9e23-0c1ee0b63d33", 00:10:43.299 "assigned_rate_limits": { 00:10:43.299 "rw_ios_per_sec": 0, 00:10:43.299 "rw_mbytes_per_sec": 0, 00:10:43.299 "r_mbytes_per_sec": 0, 00:10:43.299 "w_mbytes_per_sec": 0 00:10:43.299 }, 00:10:43.299 "claimed": true, 00:10:43.299 "claim_type": "exclusive_write", 00:10:43.299 "zoned": false, 00:10:43.299 "supported_io_types": { 00:10:43.299 "read": true, 00:10:43.299 "write": true, 00:10:43.299 "unmap": true, 00:10:43.299 "flush": true, 00:10:43.299 "reset": true, 00:10:43.299 "nvme_admin": false, 00:10:43.299 "nvme_io": false, 00:10:43.299 "nvme_io_md": false, 00:10:43.299 "write_zeroes": true, 00:10:43.299 "zcopy": true, 00:10:43.299 "get_zone_info": false, 00:10:43.299 "zone_management": false, 00:10:43.299 "zone_append": false, 00:10:43.299 "compare": false, 00:10:43.299 "compare_and_write": false, 
00:10:43.299 "abort": true, 00:10:43.299 "seek_hole": false, 00:10:43.299 "seek_data": false, 00:10:43.299 "copy": true, 00:10:43.299 "nvme_iov_md": false 00:10:43.299 }, 00:10:43.299 "memory_domains": [ 00:10:43.299 { 00:10:43.299 "dma_device_id": "system", 00:10:43.299 "dma_device_type": 1 00:10:43.299 }, 00:10:43.299 { 00:10:43.299 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:43.299 "dma_device_type": 2 00:10:43.299 } 00:10:43.299 ], 00:10:43.300 "driver_specific": {} 00:10:43.300 } 00:10:43.300 ] 00:10:43.300 09:55:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.300 09:55:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:43.300 09:55:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:43.300 09:55:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:43.300 09:55:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:43.300 09:55:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:43.300 09:55:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:43.300 09:55:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:43.300 09:55:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:43.300 09:55:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:43.300 09:55:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:43.300 09:55:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:43.300 09:55:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:10:43.300 09:55:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:43.300 09:55:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.300 09:55:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:43.300 09:55:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.300 09:55:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.300 09:55:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.300 09:55:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:43.300 "name": "Existed_Raid", 00:10:43.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.300 "strip_size_kb": 64, 00:10:43.300 "state": "configuring", 00:10:43.300 "raid_level": "raid0", 00:10:43.300 "superblock": false, 00:10:43.300 "num_base_bdevs": 4, 00:10:43.300 "num_base_bdevs_discovered": 3, 00:10:43.300 "num_base_bdevs_operational": 4, 00:10:43.300 "base_bdevs_list": [ 00:10:43.300 { 00:10:43.300 "name": "BaseBdev1", 00:10:43.300 "uuid": "960f3bb5-ae35-4b6c-aa06-d67e88803f20", 00:10:43.300 "is_configured": true, 00:10:43.300 "data_offset": 0, 00:10:43.300 "data_size": 65536 00:10:43.300 }, 00:10:43.300 { 00:10:43.300 "name": "BaseBdev2", 00:10:43.300 "uuid": "f8162bd5-5a8f-4d52-bedf-1b578455139d", 00:10:43.300 "is_configured": true, 00:10:43.300 "data_offset": 0, 00:10:43.300 "data_size": 65536 00:10:43.300 }, 00:10:43.300 { 00:10:43.300 "name": "BaseBdev3", 00:10:43.300 "uuid": "84df0d28-c456-49a0-9e23-0c1ee0b63d33", 00:10:43.300 "is_configured": true, 00:10:43.300 "data_offset": 0, 00:10:43.300 "data_size": 65536 00:10:43.300 }, 00:10:43.300 { 00:10:43.300 "name": "BaseBdev4", 00:10:43.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.300 "is_configured": false, 
00:10:43.300 "data_offset": 0, 00:10:43.300 "data_size": 0 00:10:43.300 } 00:10:43.300 ] 00:10:43.300 }' 00:10:43.300 09:55:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.300 09:55:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.557 09:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:43.557 09:55:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.557 09:55:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.816 [2024-10-21 09:55:20.167116] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:43.816 [2024-10-21 09:55:20.167170] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:10:43.816 [2024-10-21 09:55:20.167180] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:43.816 [2024-10-21 09:55:20.167525] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:10:43.816 [2024-10-21 09:55:20.167753] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:10:43.816 [2024-10-21 09:55:20.167770] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006280 00:10:43.816 [2024-10-21 09:55:20.168090] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:43.816 BaseBdev4 00:10:43.816 09:55:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.816 09:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:43.816 09:55:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:43.816 09:55:20 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:43.816 09:55:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:43.816 09:55:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:43.816 09:55:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:43.816 09:55:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:43.816 09:55:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.816 09:55:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.816 09:55:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.816 09:55:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:43.816 09:55:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.816 09:55:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.816 [ 00:10:43.816 { 00:10:43.816 "name": "BaseBdev4", 00:10:43.816 "aliases": [ 00:10:43.816 "729f6cd5-56cc-47be-96ba-ba6544fa313f" 00:10:43.816 ], 00:10:43.816 "product_name": "Malloc disk", 00:10:43.816 "block_size": 512, 00:10:43.816 "num_blocks": 65536, 00:10:43.816 "uuid": "729f6cd5-56cc-47be-96ba-ba6544fa313f", 00:10:43.816 "assigned_rate_limits": { 00:10:43.816 "rw_ios_per_sec": 0, 00:10:43.816 "rw_mbytes_per_sec": 0, 00:10:43.816 "r_mbytes_per_sec": 0, 00:10:43.816 "w_mbytes_per_sec": 0 00:10:43.816 }, 00:10:43.816 "claimed": true, 00:10:43.816 "claim_type": "exclusive_write", 00:10:43.816 "zoned": false, 00:10:43.816 "supported_io_types": { 00:10:43.816 "read": true, 00:10:43.816 "write": true, 00:10:43.816 "unmap": true, 00:10:43.816 "flush": true, 00:10:43.816 "reset": true, 00:10:43.816 
"nvme_admin": false, 00:10:43.816 "nvme_io": false, 00:10:43.816 "nvme_io_md": false, 00:10:43.816 "write_zeroes": true, 00:10:43.816 "zcopy": true, 00:10:43.816 "get_zone_info": false, 00:10:43.816 "zone_management": false, 00:10:43.816 "zone_append": false, 00:10:43.816 "compare": false, 00:10:43.816 "compare_and_write": false, 00:10:43.816 "abort": true, 00:10:43.816 "seek_hole": false, 00:10:43.816 "seek_data": false, 00:10:43.816 "copy": true, 00:10:43.816 "nvme_iov_md": false 00:10:43.816 }, 00:10:43.816 "memory_domains": [ 00:10:43.816 { 00:10:43.816 "dma_device_id": "system", 00:10:43.816 "dma_device_type": 1 00:10:43.816 }, 00:10:43.816 { 00:10:43.816 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:43.816 "dma_device_type": 2 00:10:43.816 } 00:10:43.816 ], 00:10:43.816 "driver_specific": {} 00:10:43.816 } 00:10:43.816 ] 00:10:43.816 09:55:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.816 09:55:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:43.816 09:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:43.816 09:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:43.816 09:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:43.816 09:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:43.816 09:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:43.816 09:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:43.816 09:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:43.816 09:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:43.816 09:55:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:43.816 09:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:43.816 09:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:43.816 09:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:43.816 09:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.816 09:55:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.816 09:55:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.816 09:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:43.816 09:55:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.816 09:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:43.816 "name": "Existed_Raid", 00:10:43.816 "uuid": "7efd2d19-a52f-4c25-b216-69a4074e1c1f", 00:10:43.816 "strip_size_kb": 64, 00:10:43.816 "state": "online", 00:10:43.816 "raid_level": "raid0", 00:10:43.816 "superblock": false, 00:10:43.816 "num_base_bdevs": 4, 00:10:43.816 "num_base_bdevs_discovered": 4, 00:10:43.816 "num_base_bdevs_operational": 4, 00:10:43.816 "base_bdevs_list": [ 00:10:43.816 { 00:10:43.816 "name": "BaseBdev1", 00:10:43.816 "uuid": "960f3bb5-ae35-4b6c-aa06-d67e88803f20", 00:10:43.816 "is_configured": true, 00:10:43.816 "data_offset": 0, 00:10:43.816 "data_size": 65536 00:10:43.816 }, 00:10:43.816 { 00:10:43.816 "name": "BaseBdev2", 00:10:43.816 "uuid": "f8162bd5-5a8f-4d52-bedf-1b578455139d", 00:10:43.816 "is_configured": true, 00:10:43.816 "data_offset": 0, 00:10:43.816 "data_size": 65536 00:10:43.816 }, 00:10:43.816 { 00:10:43.816 "name": "BaseBdev3", 00:10:43.816 "uuid": 
"84df0d28-c456-49a0-9e23-0c1ee0b63d33", 00:10:43.816 "is_configured": true, 00:10:43.816 "data_offset": 0, 00:10:43.816 "data_size": 65536 00:10:43.816 }, 00:10:43.816 { 00:10:43.816 "name": "BaseBdev4", 00:10:43.816 "uuid": "729f6cd5-56cc-47be-96ba-ba6544fa313f", 00:10:43.816 "is_configured": true, 00:10:43.816 "data_offset": 0, 00:10:43.816 "data_size": 65536 00:10:43.816 } 00:10:43.816 ] 00:10:43.816 }' 00:10:43.816 09:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.816 09:55:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.383 09:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:44.383 09:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:44.383 09:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:44.383 09:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:44.383 09:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:44.383 09:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:44.383 09:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:44.383 09:55:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.383 09:55:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.383 09:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:44.383 [2024-10-21 09:55:20.702763] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:44.383 09:55:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.383 09:55:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:44.383 "name": "Existed_Raid", 00:10:44.383 "aliases": [ 00:10:44.383 "7efd2d19-a52f-4c25-b216-69a4074e1c1f" 00:10:44.383 ], 00:10:44.383 "product_name": "Raid Volume", 00:10:44.383 "block_size": 512, 00:10:44.383 "num_blocks": 262144, 00:10:44.383 "uuid": "7efd2d19-a52f-4c25-b216-69a4074e1c1f", 00:10:44.383 "assigned_rate_limits": { 00:10:44.383 "rw_ios_per_sec": 0, 00:10:44.383 "rw_mbytes_per_sec": 0, 00:10:44.383 "r_mbytes_per_sec": 0, 00:10:44.383 "w_mbytes_per_sec": 0 00:10:44.383 }, 00:10:44.383 "claimed": false, 00:10:44.383 "zoned": false, 00:10:44.383 "supported_io_types": { 00:10:44.383 "read": true, 00:10:44.383 "write": true, 00:10:44.383 "unmap": true, 00:10:44.383 "flush": true, 00:10:44.383 "reset": true, 00:10:44.383 "nvme_admin": false, 00:10:44.383 "nvme_io": false, 00:10:44.383 "nvme_io_md": false, 00:10:44.383 "write_zeroes": true, 00:10:44.383 "zcopy": false, 00:10:44.383 "get_zone_info": false, 00:10:44.383 "zone_management": false, 00:10:44.383 "zone_append": false, 00:10:44.383 "compare": false, 00:10:44.383 "compare_and_write": false, 00:10:44.383 "abort": false, 00:10:44.383 "seek_hole": false, 00:10:44.383 "seek_data": false, 00:10:44.383 "copy": false, 00:10:44.383 "nvme_iov_md": false 00:10:44.383 }, 00:10:44.383 "memory_domains": [ 00:10:44.383 { 00:10:44.383 "dma_device_id": "system", 00:10:44.383 "dma_device_type": 1 00:10:44.383 }, 00:10:44.383 { 00:10:44.383 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.383 "dma_device_type": 2 00:10:44.383 }, 00:10:44.383 { 00:10:44.383 "dma_device_id": "system", 00:10:44.383 "dma_device_type": 1 00:10:44.383 }, 00:10:44.383 { 00:10:44.383 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.383 "dma_device_type": 2 00:10:44.383 }, 00:10:44.383 { 00:10:44.383 "dma_device_id": "system", 00:10:44.383 "dma_device_type": 1 00:10:44.383 }, 00:10:44.383 { 00:10:44.383 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:10:44.383 "dma_device_type": 2 00:10:44.383 }, 00:10:44.383 { 00:10:44.383 "dma_device_id": "system", 00:10:44.383 "dma_device_type": 1 00:10:44.383 }, 00:10:44.384 { 00:10:44.384 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.384 "dma_device_type": 2 00:10:44.384 } 00:10:44.384 ], 00:10:44.384 "driver_specific": { 00:10:44.384 "raid": { 00:10:44.384 "uuid": "7efd2d19-a52f-4c25-b216-69a4074e1c1f", 00:10:44.384 "strip_size_kb": 64, 00:10:44.384 "state": "online", 00:10:44.384 "raid_level": "raid0", 00:10:44.384 "superblock": false, 00:10:44.384 "num_base_bdevs": 4, 00:10:44.384 "num_base_bdevs_discovered": 4, 00:10:44.384 "num_base_bdevs_operational": 4, 00:10:44.384 "base_bdevs_list": [ 00:10:44.384 { 00:10:44.384 "name": "BaseBdev1", 00:10:44.384 "uuid": "960f3bb5-ae35-4b6c-aa06-d67e88803f20", 00:10:44.384 "is_configured": true, 00:10:44.384 "data_offset": 0, 00:10:44.384 "data_size": 65536 00:10:44.384 }, 00:10:44.384 { 00:10:44.384 "name": "BaseBdev2", 00:10:44.384 "uuid": "f8162bd5-5a8f-4d52-bedf-1b578455139d", 00:10:44.384 "is_configured": true, 00:10:44.384 "data_offset": 0, 00:10:44.384 "data_size": 65536 00:10:44.384 }, 00:10:44.384 { 00:10:44.384 "name": "BaseBdev3", 00:10:44.384 "uuid": "84df0d28-c456-49a0-9e23-0c1ee0b63d33", 00:10:44.384 "is_configured": true, 00:10:44.384 "data_offset": 0, 00:10:44.384 "data_size": 65536 00:10:44.384 }, 00:10:44.384 { 00:10:44.384 "name": "BaseBdev4", 00:10:44.384 "uuid": "729f6cd5-56cc-47be-96ba-ba6544fa313f", 00:10:44.384 "is_configured": true, 00:10:44.384 "data_offset": 0, 00:10:44.384 "data_size": 65536 00:10:44.384 } 00:10:44.384 ] 00:10:44.384 } 00:10:44.384 } 00:10:44.384 }' 00:10:44.384 09:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:44.384 09:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:44.384 BaseBdev2 00:10:44.384 BaseBdev3 
00:10:44.384 BaseBdev4' 00:10:44.384 09:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:44.384 09:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:44.384 09:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:44.384 09:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:44.384 09:55:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.384 09:55:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.384 09:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:44.384 09:55:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.384 09:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:44.384 09:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:44.384 09:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:44.384 09:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:44.384 09:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:44.384 09:55:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.384 09:55:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.384 09:55:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.384 09:55:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:44.384 09:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:44.384 09:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:44.384 09:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:44.384 09:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:44.384 09:55:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.384 09:55:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.384 09:55:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.384 09:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:44.384 09:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:44.384 09:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:44.384 09:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:44.384 09:55:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.384 09:55:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.384 09:55:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:44.643 09:55:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.643 09:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:44.643 09:55:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:44.643 09:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:44.643 09:55:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.643 09:55:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.643 [2024-10-21 09:55:21.029855] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:44.643 [2024-10-21 09:55:21.029895] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:44.643 [2024-10-21 09:55:21.029954] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:44.643 09:55:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.643 09:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:44.643 09:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:44.643 09:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:44.643 09:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:44.643 09:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:44.643 09:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:10:44.643 09:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:44.643 09:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:44.643 09:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:44.643 09:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:44.643 09:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:44.643 09:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.643 09:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.643 09:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.643 09:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.643 09:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.643 09:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:44.643 09:55:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.643 09:55:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.643 09:55:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.643 09:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.643 "name": "Existed_Raid", 00:10:44.643 "uuid": "7efd2d19-a52f-4c25-b216-69a4074e1c1f", 00:10:44.643 "strip_size_kb": 64, 00:10:44.643 "state": "offline", 00:10:44.643 "raid_level": "raid0", 00:10:44.643 "superblock": false, 00:10:44.643 "num_base_bdevs": 4, 00:10:44.643 "num_base_bdevs_discovered": 3, 00:10:44.643 "num_base_bdevs_operational": 3, 00:10:44.643 "base_bdevs_list": [ 00:10:44.643 { 00:10:44.643 "name": null, 00:10:44.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.643 "is_configured": false, 00:10:44.643 "data_offset": 0, 00:10:44.643 "data_size": 65536 00:10:44.643 }, 00:10:44.643 { 00:10:44.643 "name": "BaseBdev2", 00:10:44.643 "uuid": "f8162bd5-5a8f-4d52-bedf-1b578455139d", 00:10:44.643 "is_configured": 
true, 00:10:44.643 "data_offset": 0, 00:10:44.643 "data_size": 65536 00:10:44.643 }, 00:10:44.643 { 00:10:44.643 "name": "BaseBdev3", 00:10:44.643 "uuid": "84df0d28-c456-49a0-9e23-0c1ee0b63d33", 00:10:44.643 "is_configured": true, 00:10:44.643 "data_offset": 0, 00:10:44.643 "data_size": 65536 00:10:44.643 }, 00:10:44.643 { 00:10:44.643 "name": "BaseBdev4", 00:10:44.643 "uuid": "729f6cd5-56cc-47be-96ba-ba6544fa313f", 00:10:44.643 "is_configured": true, 00:10:44.643 "data_offset": 0, 00:10:44.643 "data_size": 65536 00:10:44.643 } 00:10:44.643 ] 00:10:44.643 }' 00:10:44.643 09:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.643 09:55:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.211 09:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:45.211 09:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:45.211 09:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:45.211 09:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.211 09:55:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.211 09:55:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.211 09:55:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.211 09:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:45.211 09:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:45.211 09:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:45.211 09:55:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:10:45.211 09:55:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.211 [2024-10-21 09:55:21.660062] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:45.211 09:55:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.211 09:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:45.211 09:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:45.211 09:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.211 09:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:45.211 09:55:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.211 09:55:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.211 09:55:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.470 09:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:45.470 09:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:45.470 09:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:45.470 09:55:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.470 09:55:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.470 [2024-10-21 09:55:21.829859] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:45.470 09:55:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.470 09:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:45.470 09:55:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:45.470 09:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.470 09:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:45.470 09:55:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.470 09:55:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.470 09:55:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.470 09:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:45.470 09:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:45.470 09:55:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:45.470 09:55:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.470 09:55:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.470 [2024-10-21 09:55:21.998091] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:45.470 [2024-10-21 09:55:21.998211] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state offline 00:10:45.730 09:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.730 09:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:45.730 09:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:45.730 09:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.730 09:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:10:45.730 09:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.730 09:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.730 09:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.730 09:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:45.730 09:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:45.730 09:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:45.730 09:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:45.730 09:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:45.730 09:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:45.730 09:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.730 09:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.730 BaseBdev2 00:10:45.730 09:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.730 09:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:45.730 09:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:45.730 09:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:45.730 09:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:45.730 09:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:45.730 09:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:10:45.730 09:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:45.730 09:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.730 09:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.730 09:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.730 09:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:45.730 09:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.730 09:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.730 [ 00:10:45.730 { 00:10:45.730 "name": "BaseBdev2", 00:10:45.730 "aliases": [ 00:10:45.730 "e57fa4d1-bd5a-4ba0-ac22-65c211ee65ea" 00:10:45.730 ], 00:10:45.730 "product_name": "Malloc disk", 00:10:45.730 "block_size": 512, 00:10:45.730 "num_blocks": 65536, 00:10:45.730 "uuid": "e57fa4d1-bd5a-4ba0-ac22-65c211ee65ea", 00:10:45.730 "assigned_rate_limits": { 00:10:45.730 "rw_ios_per_sec": 0, 00:10:45.730 "rw_mbytes_per_sec": 0, 00:10:45.730 "r_mbytes_per_sec": 0, 00:10:45.730 "w_mbytes_per_sec": 0 00:10:45.730 }, 00:10:45.730 "claimed": false, 00:10:45.730 "zoned": false, 00:10:45.730 "supported_io_types": { 00:10:45.730 "read": true, 00:10:45.730 "write": true, 00:10:45.730 "unmap": true, 00:10:45.730 "flush": true, 00:10:45.730 "reset": true, 00:10:45.730 "nvme_admin": false, 00:10:45.730 "nvme_io": false, 00:10:45.730 "nvme_io_md": false, 00:10:45.730 "write_zeroes": true, 00:10:45.730 "zcopy": true, 00:10:45.730 "get_zone_info": false, 00:10:45.730 "zone_management": false, 00:10:45.730 "zone_append": false, 00:10:45.730 "compare": false, 00:10:45.730 "compare_and_write": false, 00:10:45.730 "abort": true, 00:10:45.730 "seek_hole": false, 00:10:45.730 
"seek_data": false, 00:10:45.730 "copy": true, 00:10:45.730 "nvme_iov_md": false 00:10:45.730 }, 00:10:45.730 "memory_domains": [ 00:10:45.730 { 00:10:45.730 "dma_device_id": "system", 00:10:45.730 "dma_device_type": 1 00:10:45.730 }, 00:10:45.730 { 00:10:45.730 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.730 "dma_device_type": 2 00:10:45.730 } 00:10:45.730 ], 00:10:45.730 "driver_specific": {} 00:10:45.730 } 00:10:45.730 ] 00:10:45.730 09:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.730 09:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:45.730 09:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:45.730 09:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:45.730 09:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:45.730 09:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.730 09:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.730 BaseBdev3 00:10:45.730 09:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.730 09:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:45.730 09:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:45.730 09:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:45.730 09:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:45.730 09:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:45.730 09:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
00:10:45.730 09:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:45.730 09:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.730 09:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.730 09:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.730 09:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:45.730 09:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.730 09:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.730 [ 00:10:45.730 { 00:10:45.730 "name": "BaseBdev3", 00:10:45.730 "aliases": [ 00:10:45.730 "e910e5e1-4d9b-48dd-8a1d-af842c90faf5" 00:10:45.730 ], 00:10:45.730 "product_name": "Malloc disk", 00:10:45.730 "block_size": 512, 00:10:45.730 "num_blocks": 65536, 00:10:45.731 "uuid": "e910e5e1-4d9b-48dd-8a1d-af842c90faf5", 00:10:45.731 "assigned_rate_limits": { 00:10:45.731 "rw_ios_per_sec": 0, 00:10:45.731 "rw_mbytes_per_sec": 0, 00:10:45.731 "r_mbytes_per_sec": 0, 00:10:45.731 "w_mbytes_per_sec": 0 00:10:45.731 }, 00:10:45.731 "claimed": false, 00:10:45.731 "zoned": false, 00:10:45.731 "supported_io_types": { 00:10:45.731 "read": true, 00:10:45.731 "write": true, 00:10:45.731 "unmap": true, 00:10:45.731 "flush": true, 00:10:45.731 "reset": true, 00:10:45.990 "nvme_admin": false, 00:10:45.990 "nvme_io": false, 00:10:45.990 "nvme_io_md": false, 00:10:45.990 "write_zeroes": true, 00:10:45.990 "zcopy": true, 00:10:45.990 "get_zone_info": false, 00:10:45.990 "zone_management": false, 00:10:45.990 "zone_append": false, 00:10:45.990 "compare": false, 00:10:45.990 "compare_and_write": false, 00:10:45.990 "abort": true, 00:10:45.990 "seek_hole": false, 00:10:45.990 "seek_data": false, 
00:10:45.990 "copy": true, 00:10:45.990 "nvme_iov_md": false 00:10:45.990 }, 00:10:45.990 "memory_domains": [ 00:10:45.990 { 00:10:45.990 "dma_device_id": "system", 00:10:45.990 "dma_device_type": 1 00:10:45.990 }, 00:10:45.990 { 00:10:45.990 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.990 "dma_device_type": 2 00:10:45.990 } 00:10:45.990 ], 00:10:45.990 "driver_specific": {} 00:10:45.990 } 00:10:45.990 ] 00:10:45.990 09:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.990 09:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:45.990 09:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:45.990 09:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:45.990 09:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:45.990 09:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.990 09:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.990 BaseBdev4 00:10:45.990 09:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.990 09:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:45.990 09:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:45.990 09:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:45.990 09:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:45.990 09:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:45.990 09:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:45.990 
09:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:45.990 09:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.990 09:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.990 09:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.990 09:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:45.990 09:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.990 09:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.990 [ 00:10:45.990 { 00:10:45.990 "name": "BaseBdev4", 00:10:45.990 "aliases": [ 00:10:45.990 "fc153128-7c69-4955-92c8-ba615b83ddb7" 00:10:45.990 ], 00:10:45.990 "product_name": "Malloc disk", 00:10:45.990 "block_size": 512, 00:10:45.990 "num_blocks": 65536, 00:10:45.990 "uuid": "fc153128-7c69-4955-92c8-ba615b83ddb7", 00:10:45.990 "assigned_rate_limits": { 00:10:45.990 "rw_ios_per_sec": 0, 00:10:45.990 "rw_mbytes_per_sec": 0, 00:10:45.990 "r_mbytes_per_sec": 0, 00:10:45.990 "w_mbytes_per_sec": 0 00:10:45.990 }, 00:10:45.990 "claimed": false, 00:10:45.990 "zoned": false, 00:10:45.990 "supported_io_types": { 00:10:45.990 "read": true, 00:10:45.990 "write": true, 00:10:45.990 "unmap": true, 00:10:45.990 "flush": true, 00:10:45.990 "reset": true, 00:10:45.990 "nvme_admin": false, 00:10:45.990 "nvme_io": false, 00:10:45.990 "nvme_io_md": false, 00:10:45.990 "write_zeroes": true, 00:10:45.990 "zcopy": true, 00:10:45.990 "get_zone_info": false, 00:10:45.990 "zone_management": false, 00:10:45.990 "zone_append": false, 00:10:45.990 "compare": false, 00:10:45.990 "compare_and_write": false, 00:10:45.990 "abort": true, 00:10:45.990 "seek_hole": false, 00:10:45.990 "seek_data": false, 00:10:45.990 
"copy": true, 00:10:45.990 "nvme_iov_md": false 00:10:45.990 }, 00:10:45.990 "memory_domains": [ 00:10:45.990 { 00:10:45.990 "dma_device_id": "system", 00:10:45.990 "dma_device_type": 1 00:10:45.990 }, 00:10:45.990 { 00:10:45.990 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.990 "dma_device_type": 2 00:10:45.990 } 00:10:45.990 ], 00:10:45.990 "driver_specific": {} 00:10:45.990 } 00:10:45.990 ] 00:10:45.990 09:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.990 09:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:45.990 09:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:45.990 09:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:45.990 09:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:45.990 09:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.990 09:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.990 [2024-10-21 09:55:22.426485] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:45.990 [2024-10-21 09:55:22.426610] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:45.990 [2024-10-21 09:55:22.426668] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:45.990 [2024-10-21 09:55:22.428790] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:45.990 [2024-10-21 09:55:22.428898] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:45.990 09:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.990 09:55:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:45.990 09:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:45.990 09:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:45.990 09:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:45.990 09:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:45.990 09:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:45.991 09:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.991 09:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.991 09:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.991 09:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.991 09:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.991 09:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.991 09:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:45.991 09:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.991 09:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.991 09:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.991 "name": "Existed_Raid", 00:10:45.991 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.991 "strip_size_kb": 64, 00:10:45.991 "state": "configuring", 00:10:45.991 
"raid_level": "raid0", 00:10:45.991 "superblock": false, 00:10:45.991 "num_base_bdevs": 4, 00:10:45.991 "num_base_bdevs_discovered": 3, 00:10:45.991 "num_base_bdevs_operational": 4, 00:10:45.991 "base_bdevs_list": [ 00:10:45.991 { 00:10:45.991 "name": "BaseBdev1", 00:10:45.991 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.991 "is_configured": false, 00:10:45.991 "data_offset": 0, 00:10:45.991 "data_size": 0 00:10:45.991 }, 00:10:45.991 { 00:10:45.991 "name": "BaseBdev2", 00:10:45.991 "uuid": "e57fa4d1-bd5a-4ba0-ac22-65c211ee65ea", 00:10:45.991 "is_configured": true, 00:10:45.991 "data_offset": 0, 00:10:45.991 "data_size": 65536 00:10:45.991 }, 00:10:45.991 { 00:10:45.991 "name": "BaseBdev3", 00:10:45.991 "uuid": "e910e5e1-4d9b-48dd-8a1d-af842c90faf5", 00:10:45.991 "is_configured": true, 00:10:45.991 "data_offset": 0, 00:10:45.991 "data_size": 65536 00:10:45.991 }, 00:10:45.991 { 00:10:45.991 "name": "BaseBdev4", 00:10:45.991 "uuid": "fc153128-7c69-4955-92c8-ba615b83ddb7", 00:10:45.991 "is_configured": true, 00:10:45.991 "data_offset": 0, 00:10:45.991 "data_size": 65536 00:10:45.991 } 00:10:45.991 ] 00:10:45.991 }' 00:10:45.991 09:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.991 09:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.558 09:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:46.558 09:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.558 09:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.558 [2024-10-21 09:55:22.905702] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:46.558 09:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.558 09:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:46.558 09:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:46.558 09:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:46.558 09:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:46.558 09:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:46.558 09:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:46.558 09:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:46.558 09:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:46.558 09:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:46.558 09:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:46.558 09:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:46.558 09:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.558 09:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.558 09:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.558 09:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.558 09:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:46.558 "name": "Existed_Raid", 00:10:46.558 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.558 "strip_size_kb": 64, 00:10:46.558 "state": "configuring", 00:10:46.558 "raid_level": "raid0", 00:10:46.558 "superblock": false, 00:10:46.558 
"num_base_bdevs": 4, 00:10:46.558 "num_base_bdevs_discovered": 2, 00:10:46.558 "num_base_bdevs_operational": 4, 00:10:46.558 "base_bdevs_list": [ 00:10:46.558 { 00:10:46.558 "name": "BaseBdev1", 00:10:46.558 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.558 "is_configured": false, 00:10:46.558 "data_offset": 0, 00:10:46.558 "data_size": 0 00:10:46.558 }, 00:10:46.558 { 00:10:46.558 "name": null, 00:10:46.558 "uuid": "e57fa4d1-bd5a-4ba0-ac22-65c211ee65ea", 00:10:46.558 "is_configured": false, 00:10:46.558 "data_offset": 0, 00:10:46.558 "data_size": 65536 00:10:46.558 }, 00:10:46.558 { 00:10:46.558 "name": "BaseBdev3", 00:10:46.558 "uuid": "e910e5e1-4d9b-48dd-8a1d-af842c90faf5", 00:10:46.558 "is_configured": true, 00:10:46.558 "data_offset": 0, 00:10:46.558 "data_size": 65536 00:10:46.558 }, 00:10:46.558 { 00:10:46.558 "name": "BaseBdev4", 00:10:46.558 "uuid": "fc153128-7c69-4955-92c8-ba615b83ddb7", 00:10:46.558 "is_configured": true, 00:10:46.558 "data_offset": 0, 00:10:46.558 "data_size": 65536 00:10:46.558 } 00:10:46.558 ] 00:10:46.558 }' 00:10:46.558 09:55:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:46.558 09:55:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.816 09:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:46.816 09:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.816 09:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.816 09:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.816 09:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.816 09:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:46.816 09:55:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:46.816 09:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.816 09:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.077 [2024-10-21 09:55:23.448805] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:47.077 BaseBdev1 00:10:47.077 09:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.077 09:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:47.077 09:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:47.077 09:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:47.077 09:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:47.077 09:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:47.077 09:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:47.077 09:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:47.077 09:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.077 09:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.077 09:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.077 09:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:47.077 09:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.077 09:55:23 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:47.077 [ 00:10:47.077 { 00:10:47.077 "name": "BaseBdev1", 00:10:47.077 "aliases": [ 00:10:47.077 "85b75b86-4958-44a1-a015-58c7deb6e0da" 00:10:47.077 ], 00:10:47.077 "product_name": "Malloc disk", 00:10:47.077 "block_size": 512, 00:10:47.077 "num_blocks": 65536, 00:10:47.077 "uuid": "85b75b86-4958-44a1-a015-58c7deb6e0da", 00:10:47.077 "assigned_rate_limits": { 00:10:47.077 "rw_ios_per_sec": 0, 00:10:47.077 "rw_mbytes_per_sec": 0, 00:10:47.077 "r_mbytes_per_sec": 0, 00:10:47.077 "w_mbytes_per_sec": 0 00:10:47.077 }, 00:10:47.077 "claimed": true, 00:10:47.077 "claim_type": "exclusive_write", 00:10:47.077 "zoned": false, 00:10:47.077 "supported_io_types": { 00:10:47.077 "read": true, 00:10:47.077 "write": true, 00:10:47.077 "unmap": true, 00:10:47.077 "flush": true, 00:10:47.077 "reset": true, 00:10:47.077 "nvme_admin": false, 00:10:47.077 "nvme_io": false, 00:10:47.077 "nvme_io_md": false, 00:10:47.077 "write_zeroes": true, 00:10:47.077 "zcopy": true, 00:10:47.077 "get_zone_info": false, 00:10:47.077 "zone_management": false, 00:10:47.077 "zone_append": false, 00:10:47.077 "compare": false, 00:10:47.077 "compare_and_write": false, 00:10:47.077 "abort": true, 00:10:47.077 "seek_hole": false, 00:10:47.077 "seek_data": false, 00:10:47.077 "copy": true, 00:10:47.077 "nvme_iov_md": false 00:10:47.077 }, 00:10:47.077 "memory_domains": [ 00:10:47.077 { 00:10:47.077 "dma_device_id": "system", 00:10:47.077 "dma_device_type": 1 00:10:47.077 }, 00:10:47.077 { 00:10:47.077 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.077 "dma_device_type": 2 00:10:47.077 } 00:10:47.077 ], 00:10:47.077 "driver_specific": {} 00:10:47.077 } 00:10:47.077 ] 00:10:47.077 09:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.077 09:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:47.077 09:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:47.077 09:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:47.077 09:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:47.077 09:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:47.077 09:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:47.077 09:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:47.077 09:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:47.077 09:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:47.077 09:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:47.077 09:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:47.077 09:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.077 09:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:47.077 09:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.077 09:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.077 09:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.077 09:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:47.077 "name": "Existed_Raid", 00:10:47.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.077 "strip_size_kb": 64, 00:10:47.077 "state": "configuring", 00:10:47.077 "raid_level": "raid0", 00:10:47.077 "superblock": false, 
00:10:47.077 "num_base_bdevs": 4, 00:10:47.077 "num_base_bdevs_discovered": 3, 00:10:47.077 "num_base_bdevs_operational": 4, 00:10:47.077 "base_bdevs_list": [ 00:10:47.077 { 00:10:47.077 "name": "BaseBdev1", 00:10:47.077 "uuid": "85b75b86-4958-44a1-a015-58c7deb6e0da", 00:10:47.077 "is_configured": true, 00:10:47.077 "data_offset": 0, 00:10:47.077 "data_size": 65536 00:10:47.077 }, 00:10:47.077 { 00:10:47.077 "name": null, 00:10:47.077 "uuid": "e57fa4d1-bd5a-4ba0-ac22-65c211ee65ea", 00:10:47.077 "is_configured": false, 00:10:47.077 "data_offset": 0, 00:10:47.077 "data_size": 65536 00:10:47.077 }, 00:10:47.077 { 00:10:47.077 "name": "BaseBdev3", 00:10:47.077 "uuid": "e910e5e1-4d9b-48dd-8a1d-af842c90faf5", 00:10:47.078 "is_configured": true, 00:10:47.078 "data_offset": 0, 00:10:47.078 "data_size": 65536 00:10:47.078 }, 00:10:47.078 { 00:10:47.078 "name": "BaseBdev4", 00:10:47.078 "uuid": "fc153128-7c69-4955-92c8-ba615b83ddb7", 00:10:47.078 "is_configured": true, 00:10:47.078 "data_offset": 0, 00:10:47.078 "data_size": 65536 00:10:47.078 } 00:10:47.078 ] 00:10:47.078 }' 00:10:47.078 09:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:47.078 09:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.646 09:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:47.646 09:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.646 09:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.646 09:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.646 09:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.646 09:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:47.646 09:55:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:47.646 09:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.646 09:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.646 [2024-10-21 09:55:23.988025] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:47.646 09:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.646 09:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:47.646 09:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:47.646 09:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:47.646 09:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:47.646 09:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:47.646 09:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:47.646 09:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:47.646 09:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:47.646 09:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:47.646 09:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:47.646 09:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:47.646 09:55:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.646 09:55:23 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.646 09:55:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.646 09:55:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.646 09:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:47.646 "name": "Existed_Raid", 00:10:47.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.646 "strip_size_kb": 64, 00:10:47.646 "state": "configuring", 00:10:47.646 "raid_level": "raid0", 00:10:47.646 "superblock": false, 00:10:47.646 "num_base_bdevs": 4, 00:10:47.646 "num_base_bdevs_discovered": 2, 00:10:47.646 "num_base_bdevs_operational": 4, 00:10:47.646 "base_bdevs_list": [ 00:10:47.646 { 00:10:47.646 "name": "BaseBdev1", 00:10:47.646 "uuid": "85b75b86-4958-44a1-a015-58c7deb6e0da", 00:10:47.646 "is_configured": true, 00:10:47.646 "data_offset": 0, 00:10:47.646 "data_size": 65536 00:10:47.646 }, 00:10:47.646 { 00:10:47.646 "name": null, 00:10:47.646 "uuid": "e57fa4d1-bd5a-4ba0-ac22-65c211ee65ea", 00:10:47.646 "is_configured": false, 00:10:47.646 "data_offset": 0, 00:10:47.646 "data_size": 65536 00:10:47.646 }, 00:10:47.646 { 00:10:47.646 "name": null, 00:10:47.646 "uuid": "e910e5e1-4d9b-48dd-8a1d-af842c90faf5", 00:10:47.646 "is_configured": false, 00:10:47.646 "data_offset": 0, 00:10:47.646 "data_size": 65536 00:10:47.646 }, 00:10:47.646 { 00:10:47.646 "name": "BaseBdev4", 00:10:47.646 "uuid": "fc153128-7c69-4955-92c8-ba615b83ddb7", 00:10:47.646 "is_configured": true, 00:10:47.646 "data_offset": 0, 00:10:47.646 "data_size": 65536 00:10:47.646 } 00:10:47.646 ] 00:10:47.646 }' 00:10:47.646 09:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:47.646 09:55:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.904 09:55:24 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.904 09:55:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.904 09:55:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.904 09:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:47.904 09:55:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.904 09:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:47.904 09:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:47.904 09:55:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.904 09:55:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.905 [2024-10-21 09:55:24.479223] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:47.905 09:55:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.905 09:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:47.905 09:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:47.905 09:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:47.905 09:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:47.905 09:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:47.905 09:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:47.905 09:55:24 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:47.905 09:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:47.905 09:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:47.905 09:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:47.905 09:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.905 09:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:47.905 09:55:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.905 09:55:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.163 09:55:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.163 09:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:48.163 "name": "Existed_Raid", 00:10:48.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.163 "strip_size_kb": 64, 00:10:48.163 "state": "configuring", 00:10:48.163 "raid_level": "raid0", 00:10:48.163 "superblock": false, 00:10:48.163 "num_base_bdevs": 4, 00:10:48.163 "num_base_bdevs_discovered": 3, 00:10:48.163 "num_base_bdevs_operational": 4, 00:10:48.163 "base_bdevs_list": [ 00:10:48.163 { 00:10:48.163 "name": "BaseBdev1", 00:10:48.163 "uuid": "85b75b86-4958-44a1-a015-58c7deb6e0da", 00:10:48.163 "is_configured": true, 00:10:48.163 "data_offset": 0, 00:10:48.163 "data_size": 65536 00:10:48.163 }, 00:10:48.163 { 00:10:48.163 "name": null, 00:10:48.163 "uuid": "e57fa4d1-bd5a-4ba0-ac22-65c211ee65ea", 00:10:48.163 "is_configured": false, 00:10:48.163 "data_offset": 0, 00:10:48.163 "data_size": 65536 00:10:48.163 }, 00:10:48.163 { 00:10:48.163 "name": "BaseBdev3", 00:10:48.163 "uuid": "e910e5e1-4d9b-48dd-8a1d-af842c90faf5", 
00:10:48.163 "is_configured": true, 00:10:48.163 "data_offset": 0, 00:10:48.163 "data_size": 65536 00:10:48.163 }, 00:10:48.163 { 00:10:48.163 "name": "BaseBdev4", 00:10:48.164 "uuid": "fc153128-7c69-4955-92c8-ba615b83ddb7", 00:10:48.164 "is_configured": true, 00:10:48.164 "data_offset": 0, 00:10:48.164 "data_size": 65536 00:10:48.164 } 00:10:48.164 ] 00:10:48.164 }' 00:10:48.164 09:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:48.164 09:55:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.422 09:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.422 09:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:48.422 09:55:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.422 09:55:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.422 09:55:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.422 09:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:48.422 09:55:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:48.422 09:55:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.422 09:55:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.422 [2024-10-21 09:55:24.950532] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:48.680 09:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.680 09:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:48.680 09:55:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:48.680 09:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:48.680 09:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:48.680 09:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:48.680 09:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:48.680 09:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:48.680 09:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:48.680 09:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:48.680 09:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:48.680 09:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.680 09:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:48.680 09:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.680 09:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.680 09:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.680 09:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:48.680 "name": "Existed_Raid", 00:10:48.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.680 "strip_size_kb": 64, 00:10:48.680 "state": "configuring", 00:10:48.680 "raid_level": "raid0", 00:10:48.680 "superblock": false, 00:10:48.680 "num_base_bdevs": 4, 00:10:48.680 "num_base_bdevs_discovered": 2, 00:10:48.680 
"num_base_bdevs_operational": 4, 00:10:48.680 "base_bdevs_list": [ 00:10:48.680 { 00:10:48.680 "name": null, 00:10:48.680 "uuid": "85b75b86-4958-44a1-a015-58c7deb6e0da", 00:10:48.680 "is_configured": false, 00:10:48.680 "data_offset": 0, 00:10:48.680 "data_size": 65536 00:10:48.680 }, 00:10:48.680 { 00:10:48.680 "name": null, 00:10:48.680 "uuid": "e57fa4d1-bd5a-4ba0-ac22-65c211ee65ea", 00:10:48.680 "is_configured": false, 00:10:48.680 "data_offset": 0, 00:10:48.680 "data_size": 65536 00:10:48.680 }, 00:10:48.680 { 00:10:48.680 "name": "BaseBdev3", 00:10:48.680 "uuid": "e910e5e1-4d9b-48dd-8a1d-af842c90faf5", 00:10:48.680 "is_configured": true, 00:10:48.680 "data_offset": 0, 00:10:48.680 "data_size": 65536 00:10:48.680 }, 00:10:48.680 { 00:10:48.680 "name": "BaseBdev4", 00:10:48.681 "uuid": "fc153128-7c69-4955-92c8-ba615b83ddb7", 00:10:48.681 "is_configured": true, 00:10:48.681 "data_offset": 0, 00:10:48.681 "data_size": 65536 00:10:48.681 } 00:10:48.681 ] 00:10:48.681 }' 00:10:48.681 09:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:48.681 09:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.939 09:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.939 09:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:48.939 09:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.939 09:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.939 09:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.198 09:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:49.198 09:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev 
Existed_Raid BaseBdev2 00:10:49.198 09:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.198 09:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.198 [2024-10-21 09:55:25.562655] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:49.198 09:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.198 09:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:49.198 09:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:49.198 09:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:49.198 09:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:49.198 09:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:49.198 09:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:49.198 09:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.198 09:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.198 09:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.198 09:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.198 09:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.199 09:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.199 09:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.199 09:55:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:49.199 09:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.199 09:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.199 "name": "Existed_Raid", 00:10:49.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.199 "strip_size_kb": 64, 00:10:49.199 "state": "configuring", 00:10:49.199 "raid_level": "raid0", 00:10:49.199 "superblock": false, 00:10:49.199 "num_base_bdevs": 4, 00:10:49.199 "num_base_bdevs_discovered": 3, 00:10:49.199 "num_base_bdevs_operational": 4, 00:10:49.199 "base_bdevs_list": [ 00:10:49.199 { 00:10:49.199 "name": null, 00:10:49.199 "uuid": "85b75b86-4958-44a1-a015-58c7deb6e0da", 00:10:49.199 "is_configured": false, 00:10:49.199 "data_offset": 0, 00:10:49.199 "data_size": 65536 00:10:49.199 }, 00:10:49.199 { 00:10:49.199 "name": "BaseBdev2", 00:10:49.199 "uuid": "e57fa4d1-bd5a-4ba0-ac22-65c211ee65ea", 00:10:49.199 "is_configured": true, 00:10:49.199 "data_offset": 0, 00:10:49.199 "data_size": 65536 00:10:49.199 }, 00:10:49.199 { 00:10:49.199 "name": "BaseBdev3", 00:10:49.199 "uuid": "e910e5e1-4d9b-48dd-8a1d-af842c90faf5", 00:10:49.199 "is_configured": true, 00:10:49.199 "data_offset": 0, 00:10:49.199 "data_size": 65536 00:10:49.199 }, 00:10:49.199 { 00:10:49.199 "name": "BaseBdev4", 00:10:49.199 "uuid": "fc153128-7c69-4955-92c8-ba615b83ddb7", 00:10:49.199 "is_configured": true, 00:10:49.199 "data_offset": 0, 00:10:49.199 "data_size": 65536 00:10:49.199 } 00:10:49.199 ] 00:10:49.199 }' 00:10:49.199 09:55:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.199 09:55:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.457 09:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.457 09:55:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:49.457 09:55:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.457 09:55:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.457 09:55:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.717 09:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:49.717 09:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.717 09:55:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.717 09:55:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.717 09:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:49.717 09:55:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.717 09:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 85b75b86-4958-44a1-a015-58c7deb6e0da 00:10:49.717 09:55:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.717 09:55:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.717 [2024-10-21 09:55:26.159597] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:49.717 [2024-10-21 09:55:26.159735] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:10:49.717 [2024-10-21 09:55:26.159751] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:49.717 [2024-10-21 09:55:26.160067] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 
00:10:49.717 [2024-10-21 09:55:26.160261] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:10:49.717 [2024-10-21 09:55:26.160278] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006600 00:10:49.717 [2024-10-21 09:55:26.160551] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:49.717 NewBaseBdev 00:10:49.717 09:55:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.717 09:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:49.717 09:55:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:10:49.717 09:55:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:49.717 09:55:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:49.717 09:55:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:49.717 09:55:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:49.717 09:55:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:49.717 09:55:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.717 09:55:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.717 09:55:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.717 09:55:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:49.717 09:55:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.717 09:55:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:10:49.717 [ 00:10:49.717 { 00:10:49.717 "name": "NewBaseBdev", 00:10:49.717 "aliases": [ 00:10:49.717 "85b75b86-4958-44a1-a015-58c7deb6e0da" 00:10:49.717 ], 00:10:49.717 "product_name": "Malloc disk", 00:10:49.717 "block_size": 512, 00:10:49.717 "num_blocks": 65536, 00:10:49.717 "uuid": "85b75b86-4958-44a1-a015-58c7deb6e0da", 00:10:49.717 "assigned_rate_limits": { 00:10:49.717 "rw_ios_per_sec": 0, 00:10:49.717 "rw_mbytes_per_sec": 0, 00:10:49.717 "r_mbytes_per_sec": 0, 00:10:49.717 "w_mbytes_per_sec": 0 00:10:49.717 }, 00:10:49.717 "claimed": true, 00:10:49.717 "claim_type": "exclusive_write", 00:10:49.717 "zoned": false, 00:10:49.717 "supported_io_types": { 00:10:49.717 "read": true, 00:10:49.717 "write": true, 00:10:49.717 "unmap": true, 00:10:49.717 "flush": true, 00:10:49.717 "reset": true, 00:10:49.717 "nvme_admin": false, 00:10:49.717 "nvme_io": false, 00:10:49.717 "nvme_io_md": false, 00:10:49.717 "write_zeroes": true, 00:10:49.717 "zcopy": true, 00:10:49.717 "get_zone_info": false, 00:10:49.717 "zone_management": false, 00:10:49.717 "zone_append": false, 00:10:49.717 "compare": false, 00:10:49.717 "compare_and_write": false, 00:10:49.717 "abort": true, 00:10:49.717 "seek_hole": false, 00:10:49.717 "seek_data": false, 00:10:49.717 "copy": true, 00:10:49.717 "nvme_iov_md": false 00:10:49.717 }, 00:10:49.717 "memory_domains": [ 00:10:49.717 { 00:10:49.717 "dma_device_id": "system", 00:10:49.717 "dma_device_type": 1 00:10:49.717 }, 00:10:49.717 { 00:10:49.717 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.717 "dma_device_type": 2 00:10:49.717 } 00:10:49.717 ], 00:10:49.717 "driver_specific": {} 00:10:49.717 } 00:10:49.717 ] 00:10:49.717 09:55:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.717 09:55:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:49.717 09:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 4 00:10:49.717 09:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:49.717 09:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:49.717 09:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:49.717 09:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:49.717 09:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:49.717 09:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.717 09:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.717 09:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.717 09:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.717 09:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.717 09:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:49.717 09:55:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.717 09:55:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.717 09:55:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.717 09:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.717 "name": "Existed_Raid", 00:10:49.717 "uuid": "92542ede-a0c8-42aa-9714-febced9f5507", 00:10:49.717 "strip_size_kb": 64, 00:10:49.717 "state": "online", 00:10:49.717 "raid_level": "raid0", 00:10:49.717 "superblock": false, 00:10:49.717 "num_base_bdevs": 4, 00:10:49.717 
"num_base_bdevs_discovered": 4, 00:10:49.717 "num_base_bdevs_operational": 4, 00:10:49.717 "base_bdevs_list": [ 00:10:49.717 { 00:10:49.717 "name": "NewBaseBdev", 00:10:49.717 "uuid": "85b75b86-4958-44a1-a015-58c7deb6e0da", 00:10:49.717 "is_configured": true, 00:10:49.717 "data_offset": 0, 00:10:49.717 "data_size": 65536 00:10:49.717 }, 00:10:49.717 { 00:10:49.717 "name": "BaseBdev2", 00:10:49.717 "uuid": "e57fa4d1-bd5a-4ba0-ac22-65c211ee65ea", 00:10:49.717 "is_configured": true, 00:10:49.717 "data_offset": 0, 00:10:49.717 "data_size": 65536 00:10:49.717 }, 00:10:49.717 { 00:10:49.717 "name": "BaseBdev3", 00:10:49.717 "uuid": "e910e5e1-4d9b-48dd-8a1d-af842c90faf5", 00:10:49.717 "is_configured": true, 00:10:49.717 "data_offset": 0, 00:10:49.717 "data_size": 65536 00:10:49.717 }, 00:10:49.717 { 00:10:49.717 "name": "BaseBdev4", 00:10:49.717 "uuid": "fc153128-7c69-4955-92c8-ba615b83ddb7", 00:10:49.717 "is_configured": true, 00:10:49.717 "data_offset": 0, 00:10:49.717 "data_size": 65536 00:10:49.717 } 00:10:49.717 ] 00:10:49.717 }' 00:10:49.717 09:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.717 09:55:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.284 09:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:50.284 09:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:50.284 09:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:50.284 09:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:50.284 09:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:50.284 09:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:50.284 09:55:26 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:50.284 09:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:50.284 09:55:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.284 09:55:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.284 [2024-10-21 09:55:26.671218] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:50.284 09:55:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.284 09:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:50.284 "name": "Existed_Raid", 00:10:50.284 "aliases": [ 00:10:50.284 "92542ede-a0c8-42aa-9714-febced9f5507" 00:10:50.284 ], 00:10:50.284 "product_name": "Raid Volume", 00:10:50.284 "block_size": 512, 00:10:50.284 "num_blocks": 262144, 00:10:50.284 "uuid": "92542ede-a0c8-42aa-9714-febced9f5507", 00:10:50.284 "assigned_rate_limits": { 00:10:50.284 "rw_ios_per_sec": 0, 00:10:50.284 "rw_mbytes_per_sec": 0, 00:10:50.284 "r_mbytes_per_sec": 0, 00:10:50.284 "w_mbytes_per_sec": 0 00:10:50.284 }, 00:10:50.285 "claimed": false, 00:10:50.285 "zoned": false, 00:10:50.285 "supported_io_types": { 00:10:50.285 "read": true, 00:10:50.285 "write": true, 00:10:50.285 "unmap": true, 00:10:50.285 "flush": true, 00:10:50.285 "reset": true, 00:10:50.285 "nvme_admin": false, 00:10:50.285 "nvme_io": false, 00:10:50.285 "nvme_io_md": false, 00:10:50.285 "write_zeroes": true, 00:10:50.285 "zcopy": false, 00:10:50.285 "get_zone_info": false, 00:10:50.285 "zone_management": false, 00:10:50.285 "zone_append": false, 00:10:50.285 "compare": false, 00:10:50.285 "compare_and_write": false, 00:10:50.285 "abort": false, 00:10:50.285 "seek_hole": false, 00:10:50.285 "seek_data": false, 00:10:50.285 "copy": false, 00:10:50.285 "nvme_iov_md": false 00:10:50.285 }, 00:10:50.285 "memory_domains": [ 
00:10:50.285 { 00:10:50.285 "dma_device_id": "system", 00:10:50.285 "dma_device_type": 1 00:10:50.285 }, 00:10:50.285 { 00:10:50.285 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.285 "dma_device_type": 2 00:10:50.285 }, 00:10:50.285 { 00:10:50.285 "dma_device_id": "system", 00:10:50.285 "dma_device_type": 1 00:10:50.285 }, 00:10:50.285 { 00:10:50.285 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.285 "dma_device_type": 2 00:10:50.285 }, 00:10:50.285 { 00:10:50.285 "dma_device_id": "system", 00:10:50.285 "dma_device_type": 1 00:10:50.285 }, 00:10:50.285 { 00:10:50.285 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.285 "dma_device_type": 2 00:10:50.285 }, 00:10:50.285 { 00:10:50.285 "dma_device_id": "system", 00:10:50.285 "dma_device_type": 1 00:10:50.285 }, 00:10:50.285 { 00:10:50.285 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.285 "dma_device_type": 2 00:10:50.285 } 00:10:50.285 ], 00:10:50.285 "driver_specific": { 00:10:50.285 "raid": { 00:10:50.285 "uuid": "92542ede-a0c8-42aa-9714-febced9f5507", 00:10:50.285 "strip_size_kb": 64, 00:10:50.285 "state": "online", 00:10:50.285 "raid_level": "raid0", 00:10:50.285 "superblock": false, 00:10:50.285 "num_base_bdevs": 4, 00:10:50.285 "num_base_bdevs_discovered": 4, 00:10:50.285 "num_base_bdevs_operational": 4, 00:10:50.285 "base_bdevs_list": [ 00:10:50.285 { 00:10:50.285 "name": "NewBaseBdev", 00:10:50.285 "uuid": "85b75b86-4958-44a1-a015-58c7deb6e0da", 00:10:50.285 "is_configured": true, 00:10:50.285 "data_offset": 0, 00:10:50.285 "data_size": 65536 00:10:50.285 }, 00:10:50.285 { 00:10:50.285 "name": "BaseBdev2", 00:10:50.285 "uuid": "e57fa4d1-bd5a-4ba0-ac22-65c211ee65ea", 00:10:50.285 "is_configured": true, 00:10:50.285 "data_offset": 0, 00:10:50.285 "data_size": 65536 00:10:50.285 }, 00:10:50.285 { 00:10:50.285 "name": "BaseBdev3", 00:10:50.285 "uuid": "e910e5e1-4d9b-48dd-8a1d-af842c90faf5", 00:10:50.285 "is_configured": true, 00:10:50.285 "data_offset": 0, 00:10:50.285 "data_size": 65536 
00:10:50.285 }, 00:10:50.285 { 00:10:50.285 "name": "BaseBdev4", 00:10:50.285 "uuid": "fc153128-7c69-4955-92c8-ba615b83ddb7", 00:10:50.285 "is_configured": true, 00:10:50.285 "data_offset": 0, 00:10:50.285 "data_size": 65536 00:10:50.285 } 00:10:50.285 ] 00:10:50.285 } 00:10:50.285 } 00:10:50.285 }' 00:10:50.285 09:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:50.285 09:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:50.285 BaseBdev2 00:10:50.285 BaseBdev3 00:10:50.285 BaseBdev4' 00:10:50.285 09:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:50.285 09:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:50.285 09:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:50.285 09:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:50.285 09:55:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.285 09:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:50.285 09:55:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.285 09:55:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.285 09:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:50.285 09:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:50.285 09:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:50.285 
09:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:50.285 09:55:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.285 09:55:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.285 09:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:50.544 09:55:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.544 09:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:50.544 09:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:50.544 09:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:50.544 09:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:50.544 09:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:50.544 09:55:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.544 09:55:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.544 09:55:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.544 09:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:50.544 09:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:50.544 09:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:50.544 09:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 
00:10:50.544 09:55:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:50.544 09:55:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.544 09:55:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.544 09:55:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.544 09:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:50.544 09:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:50.544 09:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:50.544 09:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.544 09:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.544 [2024-10-21 09:55:27.030276] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:50.544 [2024-10-21 09:55:27.030310] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:50.544 [2024-10-21 09:55:27.030406] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:50.544 [2024-10-21 09:55:27.030482] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:50.544 [2024-10-21 09:55:27.030495] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state offline 00:10:50.544 09:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.544 09:55:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 68949 00:10:50.544 09:55:27 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@950 -- # '[' -z 68949 ']' 00:10:50.544 09:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 68949 00:10:50.544 09:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:10:50.544 09:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:50.544 09:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 68949 00:10:50.544 09:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:50.544 09:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:50.544 09:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 68949' 00:10:50.544 killing process with pid 68949 00:10:50.544 09:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 68949 00:10:50.544 [2024-10-21 09:55:27.079305] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:50.544 09:55:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 68949 00:10:51.110 [2024-10-21 09:55:27.541280] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:52.488 09:55:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:52.488 00:10:52.488 real 0m12.212s 00:10:52.488 user 0m19.389s 00:10:52.488 sys 0m2.051s 00:10:52.488 09:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:52.488 09:55:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.488 ************************************ 00:10:52.488 END TEST raid_state_function_test 00:10:52.488 ************************************ 00:10:52.488 09:55:28 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb 
raid_state_function_test raid0 4 true 00:10:52.488 09:55:28 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:52.488 09:55:28 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:52.488 09:55:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:52.488 ************************************ 00:10:52.488 START TEST raid_state_function_test_sb 00:10:52.488 ************************************ 00:10:52.488 09:55:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 4 true 00:10:52.488 09:55:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:52.488 09:55:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:52.488 09:55:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:52.488 09:55:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:52.488 09:55:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:52.488 09:55:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:52.488 09:55:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:52.488 09:55:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:52.488 09:55:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:52.488 09:55:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:52.488 09:55:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:52.488 09:55:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:52.488 09:55:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:52.488 
09:55:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:52.488 09:55:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:52.488 09:55:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:52.488 09:55:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:52.488 09:55:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:52.488 09:55:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:52.488 09:55:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:52.488 09:55:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:52.488 09:55:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:52.488 09:55:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:52.488 09:55:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:52.488 09:55:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:52.488 09:55:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:52.488 09:55:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:52.488 09:55:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:52.488 09:55:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:52.488 09:55:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=69626 00:10:52.488 09:55:28 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:52.488 09:55:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69626' 00:10:52.488 Process raid pid: 69626 00:10:52.488 09:55:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 69626 00:10:52.488 09:55:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 69626 ']' 00:10:52.488 09:55:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:52.488 09:55:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:52.488 09:55:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:52.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:52.488 09:55:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:52.488 09:55:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.488 [2024-10-21 09:55:28.911602] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
00:10:52.488 [2024-10-21 09:55:28.911792] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:52.489 [2024-10-21 09:55:29.073216] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:52.748 [2024-10-21 09:55:29.196402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:53.007 [2024-10-21 09:55:29.425305] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:53.007 [2024-10-21 09:55:29.425428] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:53.268 09:55:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:53.268 09:55:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:10:53.268 09:55:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:53.268 09:55:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.268 09:55:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.268 [2024-10-21 09:55:29.755915] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:53.268 [2024-10-21 09:55:29.756026] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:53.268 [2024-10-21 09:55:29.756073] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:53.268 [2024-10-21 09:55:29.756097] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:53.268 [2024-10-21 09:55:29.756116] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:10:53.268 [2024-10-21 09:55:29.756137] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:53.268 [2024-10-21 09:55:29.756155] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:53.268 [2024-10-21 09:55:29.756210] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:53.268 09:55:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.268 09:55:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:53.268 09:55:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:53.268 09:55:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:53.268 09:55:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:53.268 09:55:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:53.268 09:55:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:53.268 09:55:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.268 09:55:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.268 09:55:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:53.268 09:55:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.268 09:55:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:53.268 09:55:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.268 09:55:29 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.268 09:55:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.268 09:55:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.268 09:55:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.268 "name": "Existed_Raid", 00:10:53.268 "uuid": "eafc9a72-8d3e-45ac-8768-4af9901f9329", 00:10:53.268 "strip_size_kb": 64, 00:10:53.268 "state": "configuring", 00:10:53.268 "raid_level": "raid0", 00:10:53.268 "superblock": true, 00:10:53.268 "num_base_bdevs": 4, 00:10:53.268 "num_base_bdevs_discovered": 0, 00:10:53.268 "num_base_bdevs_operational": 4, 00:10:53.268 "base_bdevs_list": [ 00:10:53.268 { 00:10:53.268 "name": "BaseBdev1", 00:10:53.268 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.268 "is_configured": false, 00:10:53.268 "data_offset": 0, 00:10:53.268 "data_size": 0 00:10:53.268 }, 00:10:53.268 { 00:10:53.268 "name": "BaseBdev2", 00:10:53.268 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.268 "is_configured": false, 00:10:53.268 "data_offset": 0, 00:10:53.268 "data_size": 0 00:10:53.268 }, 00:10:53.268 { 00:10:53.268 "name": "BaseBdev3", 00:10:53.268 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.268 "is_configured": false, 00:10:53.268 "data_offset": 0, 00:10:53.268 "data_size": 0 00:10:53.268 }, 00:10:53.268 { 00:10:53.268 "name": "BaseBdev4", 00:10:53.268 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.268 "is_configured": false, 00:10:53.268 "data_offset": 0, 00:10:53.268 "data_size": 0 00:10:53.268 } 00:10:53.268 ] 00:10:53.268 }' 00:10:53.268 09:55:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.268 09:55:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.838 09:55:30 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:53.838 09:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.838 09:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.838 [2024-10-21 09:55:30.155148] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:53.838 [2024-10-21 09:55:30.155258] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000005b80 name Existed_Raid, state configuring 00:10:53.838 09:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.838 09:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:53.838 09:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.838 09:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.838 [2024-10-21 09:55:30.167147] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:53.838 [2024-10-21 09:55:30.167190] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:53.838 [2024-10-21 09:55:30.167199] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:53.838 [2024-10-21 09:55:30.167208] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:53.838 [2024-10-21 09:55:30.167214] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:53.838 [2024-10-21 09:55:30.167222] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:53.838 [2024-10-21 09:55:30.167228] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev4 00:10:53.838 [2024-10-21 09:55:30.167236] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:53.838 09:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.838 09:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:53.838 09:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.838 09:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.838 [2024-10-21 09:55:30.218070] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:53.838 BaseBdev1 00:10:53.838 09:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.838 09:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:53.838 09:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:53.838 09:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:53.838 09:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:53.838 09:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:53.838 09:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:53.838 09:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:53.838 09:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.838 09:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.838 09:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:10:53.838 09:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:53.838 09:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.838 09:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.838 [ 00:10:53.838 { 00:10:53.838 "name": "BaseBdev1", 00:10:53.838 "aliases": [ 00:10:53.838 "1cd66f03-5603-4a3b-914e-b7e2373c7a8c" 00:10:53.838 ], 00:10:53.838 "product_name": "Malloc disk", 00:10:53.838 "block_size": 512, 00:10:53.838 "num_blocks": 65536, 00:10:53.838 "uuid": "1cd66f03-5603-4a3b-914e-b7e2373c7a8c", 00:10:53.838 "assigned_rate_limits": { 00:10:53.838 "rw_ios_per_sec": 0, 00:10:53.838 "rw_mbytes_per_sec": 0, 00:10:53.838 "r_mbytes_per_sec": 0, 00:10:53.838 "w_mbytes_per_sec": 0 00:10:53.838 }, 00:10:53.838 "claimed": true, 00:10:53.838 "claim_type": "exclusive_write", 00:10:53.838 "zoned": false, 00:10:53.838 "supported_io_types": { 00:10:53.838 "read": true, 00:10:53.838 "write": true, 00:10:53.838 "unmap": true, 00:10:53.838 "flush": true, 00:10:53.838 "reset": true, 00:10:53.838 "nvme_admin": false, 00:10:53.838 "nvme_io": false, 00:10:53.838 "nvme_io_md": false, 00:10:53.838 "write_zeroes": true, 00:10:53.838 "zcopy": true, 00:10:53.838 "get_zone_info": false, 00:10:53.838 "zone_management": false, 00:10:53.838 "zone_append": false, 00:10:53.838 "compare": false, 00:10:53.838 "compare_and_write": false, 00:10:53.838 "abort": true, 00:10:53.838 "seek_hole": false, 00:10:53.838 "seek_data": false, 00:10:53.838 "copy": true, 00:10:53.838 "nvme_iov_md": false 00:10:53.838 }, 00:10:53.838 "memory_domains": [ 00:10:53.838 { 00:10:53.838 "dma_device_id": "system", 00:10:53.838 "dma_device_type": 1 00:10:53.838 }, 00:10:53.838 { 00:10:53.838 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.838 "dma_device_type": 2 00:10:53.838 } 00:10:53.838 ], 00:10:53.838 "driver_specific": {} 
00:10:53.838 } 00:10:53.838 ] 00:10:53.838 09:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.838 09:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:53.838 09:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:53.838 09:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:53.838 09:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:53.838 09:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:53.838 09:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:53.838 09:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:53.838 09:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.838 09:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.838 09:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:53.838 09:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.839 09:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.839 09:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:53.839 09:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.839 09:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.839 09:55:30 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.839 09:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.839 "name": "Existed_Raid", 00:10:53.839 "uuid": "c3530569-9293-4764-98c6-bc20cbd3ba13", 00:10:53.839 "strip_size_kb": 64, 00:10:53.839 "state": "configuring", 00:10:53.839 "raid_level": "raid0", 00:10:53.839 "superblock": true, 00:10:53.839 "num_base_bdevs": 4, 00:10:53.839 "num_base_bdevs_discovered": 1, 00:10:53.839 "num_base_bdevs_operational": 4, 00:10:53.839 "base_bdevs_list": [ 00:10:53.839 { 00:10:53.839 "name": "BaseBdev1", 00:10:53.839 "uuid": "1cd66f03-5603-4a3b-914e-b7e2373c7a8c", 00:10:53.839 "is_configured": true, 00:10:53.839 "data_offset": 2048, 00:10:53.839 "data_size": 63488 00:10:53.839 }, 00:10:53.839 { 00:10:53.839 "name": "BaseBdev2", 00:10:53.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.839 "is_configured": false, 00:10:53.839 "data_offset": 0, 00:10:53.839 "data_size": 0 00:10:53.839 }, 00:10:53.839 { 00:10:53.839 "name": "BaseBdev3", 00:10:53.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.839 "is_configured": false, 00:10:53.839 "data_offset": 0, 00:10:53.839 "data_size": 0 00:10:53.839 }, 00:10:53.839 { 00:10:53.839 "name": "BaseBdev4", 00:10:53.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.839 "is_configured": false, 00:10:53.839 "data_offset": 0, 00:10:53.839 "data_size": 0 00:10:53.839 } 00:10:53.839 ] 00:10:53.839 }' 00:10:53.839 09:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.839 09:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.098 09:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:54.098 09:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.098 09:55:30 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:54.098 [2024-10-21 09:55:30.665382] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:54.098 [2024-10-21 09:55:30.665511] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000005f00 name Existed_Raid, state configuring 00:10:54.099 09:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.099 09:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:54.099 09:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.099 09:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.099 [2024-10-21 09:55:30.677413] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:54.099 [2024-10-21 09:55:30.679454] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:54.099 [2024-10-21 09:55:30.679503] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:54.099 [2024-10-21 09:55:30.679515] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:54.099 [2024-10-21 09:55:30.679527] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:54.099 [2024-10-21 09:55:30.679535] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:54.099 [2024-10-21 09:55:30.679544] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:54.099 09:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.099 09:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:54.099 09:55:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:54.099 09:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:54.099 09:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:54.099 09:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:54.099 09:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:54.099 09:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:54.099 09:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:54.099 09:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:54.099 09:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:54.099 09:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:54.099 09:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:54.099 09:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.099 09:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:54.099 09:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.099 09:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.357 09:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.357 09:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:54.357 "name": 
"Existed_Raid", 00:10:54.357 "uuid": "79275b4e-c103-49e7-b984-2b2f19e7b380", 00:10:54.357 "strip_size_kb": 64, 00:10:54.357 "state": "configuring", 00:10:54.357 "raid_level": "raid0", 00:10:54.357 "superblock": true, 00:10:54.357 "num_base_bdevs": 4, 00:10:54.357 "num_base_bdevs_discovered": 1, 00:10:54.357 "num_base_bdevs_operational": 4, 00:10:54.357 "base_bdevs_list": [ 00:10:54.357 { 00:10:54.357 "name": "BaseBdev1", 00:10:54.357 "uuid": "1cd66f03-5603-4a3b-914e-b7e2373c7a8c", 00:10:54.357 "is_configured": true, 00:10:54.357 "data_offset": 2048, 00:10:54.357 "data_size": 63488 00:10:54.357 }, 00:10:54.357 { 00:10:54.357 "name": "BaseBdev2", 00:10:54.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.357 "is_configured": false, 00:10:54.357 "data_offset": 0, 00:10:54.357 "data_size": 0 00:10:54.357 }, 00:10:54.357 { 00:10:54.357 "name": "BaseBdev3", 00:10:54.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.357 "is_configured": false, 00:10:54.357 "data_offset": 0, 00:10:54.357 "data_size": 0 00:10:54.357 }, 00:10:54.357 { 00:10:54.357 "name": "BaseBdev4", 00:10:54.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.357 "is_configured": false, 00:10:54.357 "data_offset": 0, 00:10:54.357 "data_size": 0 00:10:54.357 } 00:10:54.357 ] 00:10:54.357 }' 00:10:54.357 09:55:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:54.357 09:55:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.615 09:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:54.615 09:55:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.615 09:55:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.615 [2024-10-21 09:55:31.200216] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:10:54.615 BaseBdev2 00:10:54.615 09:55:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.615 09:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:54.615 09:55:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:54.615 09:55:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:54.615 09:55:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:54.615 09:55:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:54.615 09:55:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:54.615 09:55:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:54.615 09:55:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.615 09:55:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.874 09:55:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.874 09:55:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:54.874 09:55:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.874 09:55:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.874 [ 00:10:54.874 { 00:10:54.874 "name": "BaseBdev2", 00:10:54.874 "aliases": [ 00:10:54.874 "8f522f79-9c6a-4ef9-a531-f4b2d5620e22" 00:10:54.874 ], 00:10:54.874 "product_name": "Malloc disk", 00:10:54.874 "block_size": 512, 00:10:54.874 "num_blocks": 65536, 00:10:54.874 "uuid": "8f522f79-9c6a-4ef9-a531-f4b2d5620e22", 00:10:54.874 
"assigned_rate_limits": { 00:10:54.874 "rw_ios_per_sec": 0, 00:10:54.874 "rw_mbytes_per_sec": 0, 00:10:54.874 "r_mbytes_per_sec": 0, 00:10:54.874 "w_mbytes_per_sec": 0 00:10:54.874 }, 00:10:54.874 "claimed": true, 00:10:54.874 "claim_type": "exclusive_write", 00:10:54.874 "zoned": false, 00:10:54.874 "supported_io_types": { 00:10:54.874 "read": true, 00:10:54.874 "write": true, 00:10:54.874 "unmap": true, 00:10:54.874 "flush": true, 00:10:54.874 "reset": true, 00:10:54.874 "nvme_admin": false, 00:10:54.874 "nvme_io": false, 00:10:54.874 "nvme_io_md": false, 00:10:54.874 "write_zeroes": true, 00:10:54.874 "zcopy": true, 00:10:54.874 "get_zone_info": false, 00:10:54.874 "zone_management": false, 00:10:54.874 "zone_append": false, 00:10:54.874 "compare": false, 00:10:54.874 "compare_and_write": false, 00:10:54.874 "abort": true, 00:10:54.874 "seek_hole": false, 00:10:54.874 "seek_data": false, 00:10:54.874 "copy": true, 00:10:54.874 "nvme_iov_md": false 00:10:54.874 }, 00:10:54.874 "memory_domains": [ 00:10:54.874 { 00:10:54.874 "dma_device_id": "system", 00:10:54.874 "dma_device_type": 1 00:10:54.874 }, 00:10:54.874 { 00:10:54.874 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.874 "dma_device_type": 2 00:10:54.874 } 00:10:54.874 ], 00:10:54.874 "driver_specific": {} 00:10:54.874 } 00:10:54.874 ] 00:10:54.874 09:55:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.874 09:55:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:54.874 09:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:54.874 09:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:54.874 09:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:54.874 09:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:10:54.874 09:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:54.874 09:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:54.874 09:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:54.874 09:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:54.874 09:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:54.874 09:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:54.874 09:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:54.874 09:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:54.874 09:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.874 09:55:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.874 09:55:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.874 09:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:54.874 09:55:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.874 09:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:54.874 "name": "Existed_Raid", 00:10:54.874 "uuid": "79275b4e-c103-49e7-b984-2b2f19e7b380", 00:10:54.874 "strip_size_kb": 64, 00:10:54.874 "state": "configuring", 00:10:54.874 "raid_level": "raid0", 00:10:54.874 "superblock": true, 00:10:54.874 "num_base_bdevs": 4, 00:10:54.874 "num_base_bdevs_discovered": 2, 00:10:54.874 "num_base_bdevs_operational": 4, 
00:10:54.874 "base_bdevs_list": [ 00:10:54.874 { 00:10:54.874 "name": "BaseBdev1", 00:10:54.874 "uuid": "1cd66f03-5603-4a3b-914e-b7e2373c7a8c", 00:10:54.874 "is_configured": true, 00:10:54.874 "data_offset": 2048, 00:10:54.874 "data_size": 63488 00:10:54.874 }, 00:10:54.874 { 00:10:54.874 "name": "BaseBdev2", 00:10:54.874 "uuid": "8f522f79-9c6a-4ef9-a531-f4b2d5620e22", 00:10:54.874 "is_configured": true, 00:10:54.874 "data_offset": 2048, 00:10:54.874 "data_size": 63488 00:10:54.874 }, 00:10:54.874 { 00:10:54.874 "name": "BaseBdev3", 00:10:54.874 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.874 "is_configured": false, 00:10:54.874 "data_offset": 0, 00:10:54.874 "data_size": 0 00:10:54.874 }, 00:10:54.874 { 00:10:54.874 "name": "BaseBdev4", 00:10:54.874 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.874 "is_configured": false, 00:10:54.874 "data_offset": 0, 00:10:54.874 "data_size": 0 00:10:54.874 } 00:10:54.874 ] 00:10:54.874 }' 00:10:54.874 09:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:54.874 09:55:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.132 09:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:55.132 09:55:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.132 09:55:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.426 [2024-10-21 09:55:31.764315] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:55.426 BaseBdev3 00:10:55.426 09:55:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.426 09:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:55.426 09:55:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # 
local bdev_name=BaseBdev3 00:10:55.426 09:55:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:55.426 09:55:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:55.426 09:55:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:55.426 09:55:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:55.426 09:55:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:55.426 09:55:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.426 09:55:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.426 09:55:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.426 09:55:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:55.426 09:55:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.426 09:55:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.426 [ 00:10:55.426 { 00:10:55.426 "name": "BaseBdev3", 00:10:55.426 "aliases": [ 00:10:55.426 "e0ab4c75-de68-4629-a55c-26d67e6e2e71" 00:10:55.426 ], 00:10:55.426 "product_name": "Malloc disk", 00:10:55.426 "block_size": 512, 00:10:55.426 "num_blocks": 65536, 00:10:55.426 "uuid": "e0ab4c75-de68-4629-a55c-26d67e6e2e71", 00:10:55.426 "assigned_rate_limits": { 00:10:55.426 "rw_ios_per_sec": 0, 00:10:55.426 "rw_mbytes_per_sec": 0, 00:10:55.426 "r_mbytes_per_sec": 0, 00:10:55.426 "w_mbytes_per_sec": 0 00:10:55.426 }, 00:10:55.426 "claimed": true, 00:10:55.426 "claim_type": "exclusive_write", 00:10:55.426 "zoned": false, 00:10:55.426 "supported_io_types": { 00:10:55.426 "read": true, 00:10:55.426 
"write": true, 00:10:55.426 "unmap": true, 00:10:55.426 "flush": true, 00:10:55.426 "reset": true, 00:10:55.426 "nvme_admin": false, 00:10:55.426 "nvme_io": false, 00:10:55.426 "nvme_io_md": false, 00:10:55.426 "write_zeroes": true, 00:10:55.426 "zcopy": true, 00:10:55.426 "get_zone_info": false, 00:10:55.426 "zone_management": false, 00:10:55.426 "zone_append": false, 00:10:55.426 "compare": false, 00:10:55.426 "compare_and_write": false, 00:10:55.426 "abort": true, 00:10:55.426 "seek_hole": false, 00:10:55.426 "seek_data": false, 00:10:55.426 "copy": true, 00:10:55.426 "nvme_iov_md": false 00:10:55.426 }, 00:10:55.427 "memory_domains": [ 00:10:55.427 { 00:10:55.427 "dma_device_id": "system", 00:10:55.427 "dma_device_type": 1 00:10:55.427 }, 00:10:55.427 { 00:10:55.427 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.427 "dma_device_type": 2 00:10:55.427 } 00:10:55.427 ], 00:10:55.427 "driver_specific": {} 00:10:55.427 } 00:10:55.427 ] 00:10:55.427 09:55:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.427 09:55:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:55.427 09:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:55.427 09:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:55.427 09:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:55.427 09:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:55.427 09:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:55.427 09:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:55.427 09:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:55.427 09:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:55.427 09:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.427 09:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.427 09:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.427 09:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.427 09:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:55.427 09:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.427 09:55:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.427 09:55:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.427 09:55:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.427 09:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.427 "name": "Existed_Raid", 00:10:55.427 "uuid": "79275b4e-c103-49e7-b984-2b2f19e7b380", 00:10:55.427 "strip_size_kb": 64, 00:10:55.427 "state": "configuring", 00:10:55.427 "raid_level": "raid0", 00:10:55.427 "superblock": true, 00:10:55.427 "num_base_bdevs": 4, 00:10:55.427 "num_base_bdevs_discovered": 3, 00:10:55.427 "num_base_bdevs_operational": 4, 00:10:55.427 "base_bdevs_list": [ 00:10:55.427 { 00:10:55.427 "name": "BaseBdev1", 00:10:55.427 "uuid": "1cd66f03-5603-4a3b-914e-b7e2373c7a8c", 00:10:55.427 "is_configured": true, 00:10:55.427 "data_offset": 2048, 00:10:55.427 "data_size": 63488 00:10:55.427 }, 00:10:55.427 { 00:10:55.427 "name": "BaseBdev2", 00:10:55.427 "uuid": 
"8f522f79-9c6a-4ef9-a531-f4b2d5620e22", 00:10:55.427 "is_configured": true, 00:10:55.427 "data_offset": 2048, 00:10:55.427 "data_size": 63488 00:10:55.427 }, 00:10:55.427 { 00:10:55.427 "name": "BaseBdev3", 00:10:55.427 "uuid": "e0ab4c75-de68-4629-a55c-26d67e6e2e71", 00:10:55.427 "is_configured": true, 00:10:55.427 "data_offset": 2048, 00:10:55.427 "data_size": 63488 00:10:55.427 }, 00:10:55.427 { 00:10:55.427 "name": "BaseBdev4", 00:10:55.427 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.427 "is_configured": false, 00:10:55.427 "data_offset": 0, 00:10:55.427 "data_size": 0 00:10:55.427 } 00:10:55.427 ] 00:10:55.427 }' 00:10:55.427 09:55:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.427 09:55:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.687 09:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:55.687 09:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.687 09:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.687 [2024-10-21 09:55:32.259520] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:55.687 [2024-10-21 09:55:32.259790] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:10:55.687 [2024-10-21 09:55:32.259805] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:55.687 BaseBdev4 00:10:55.687 [2024-10-21 09:55:32.260121] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:10:55.687 [2024-10-21 09:55:32.260286] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:10:55.687 [2024-10-21 09:55:32.260300] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000006280 00:10:55.687 [2024-10-21 09:55:32.260436] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:55.687 09:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.687 09:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:55.687 09:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:55.687 09:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:55.687 09:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:55.687 09:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:55.687 09:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:55.687 09:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:55.687 09:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.687 09:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.687 09:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.687 09:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:55.687 09:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.687 09:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.946 [ 00:10:55.946 { 00:10:55.946 "name": "BaseBdev4", 00:10:55.946 "aliases": [ 00:10:55.946 "cda16ef4-7c33-40eb-af63-5fe83e075409" 00:10:55.946 ], 00:10:55.946 "product_name": "Malloc disk", 00:10:55.946 "block_size": 512, 00:10:55.946 
"num_blocks": 65536, 00:10:55.946 "uuid": "cda16ef4-7c33-40eb-af63-5fe83e075409", 00:10:55.946 "assigned_rate_limits": { 00:10:55.946 "rw_ios_per_sec": 0, 00:10:55.946 "rw_mbytes_per_sec": 0, 00:10:55.946 "r_mbytes_per_sec": 0, 00:10:55.946 "w_mbytes_per_sec": 0 00:10:55.946 }, 00:10:55.946 "claimed": true, 00:10:55.946 "claim_type": "exclusive_write", 00:10:55.946 "zoned": false, 00:10:55.946 "supported_io_types": { 00:10:55.946 "read": true, 00:10:55.946 "write": true, 00:10:55.946 "unmap": true, 00:10:55.946 "flush": true, 00:10:55.946 "reset": true, 00:10:55.946 "nvme_admin": false, 00:10:55.946 "nvme_io": false, 00:10:55.946 "nvme_io_md": false, 00:10:55.946 "write_zeroes": true, 00:10:55.946 "zcopy": true, 00:10:55.946 "get_zone_info": false, 00:10:55.946 "zone_management": false, 00:10:55.946 "zone_append": false, 00:10:55.946 "compare": false, 00:10:55.946 "compare_and_write": false, 00:10:55.946 "abort": true, 00:10:55.946 "seek_hole": false, 00:10:55.946 "seek_data": false, 00:10:55.946 "copy": true, 00:10:55.946 "nvme_iov_md": false 00:10:55.946 }, 00:10:55.946 "memory_domains": [ 00:10:55.946 { 00:10:55.946 "dma_device_id": "system", 00:10:55.946 "dma_device_type": 1 00:10:55.946 }, 00:10:55.946 { 00:10:55.946 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.946 "dma_device_type": 2 00:10:55.946 } 00:10:55.946 ], 00:10:55.946 "driver_specific": {} 00:10:55.946 } 00:10:55.946 ] 00:10:55.946 09:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.946 09:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:55.946 09:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:55.946 09:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:55.946 09:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:10:55.946 09:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:55.946 09:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:55.946 09:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:55.946 09:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:55.946 09:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:55.946 09:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.946 09:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.946 09:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.946 09:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.946 09:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.946 09:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.946 09:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.946 09:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:55.946 09:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.946 09:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.946 "name": "Existed_Raid", 00:10:55.946 "uuid": "79275b4e-c103-49e7-b984-2b2f19e7b380", 00:10:55.946 "strip_size_kb": 64, 00:10:55.946 "state": "online", 00:10:55.946 "raid_level": "raid0", 00:10:55.946 "superblock": true, 00:10:55.946 "num_base_bdevs": 4, 
00:10:55.946 "num_base_bdevs_discovered": 4, 00:10:55.946 "num_base_bdevs_operational": 4, 00:10:55.946 "base_bdevs_list": [ 00:10:55.946 { 00:10:55.946 "name": "BaseBdev1", 00:10:55.946 "uuid": "1cd66f03-5603-4a3b-914e-b7e2373c7a8c", 00:10:55.946 "is_configured": true, 00:10:55.946 "data_offset": 2048, 00:10:55.946 "data_size": 63488 00:10:55.946 }, 00:10:55.946 { 00:10:55.946 "name": "BaseBdev2", 00:10:55.946 "uuid": "8f522f79-9c6a-4ef9-a531-f4b2d5620e22", 00:10:55.946 "is_configured": true, 00:10:55.946 "data_offset": 2048, 00:10:55.946 "data_size": 63488 00:10:55.946 }, 00:10:55.946 { 00:10:55.946 "name": "BaseBdev3", 00:10:55.946 "uuid": "e0ab4c75-de68-4629-a55c-26d67e6e2e71", 00:10:55.946 "is_configured": true, 00:10:55.946 "data_offset": 2048, 00:10:55.946 "data_size": 63488 00:10:55.946 }, 00:10:55.946 { 00:10:55.946 "name": "BaseBdev4", 00:10:55.946 "uuid": "cda16ef4-7c33-40eb-af63-5fe83e075409", 00:10:55.946 "is_configured": true, 00:10:55.946 "data_offset": 2048, 00:10:55.946 "data_size": 63488 00:10:55.946 } 00:10:55.946 ] 00:10:55.946 }' 00:10:55.946 09:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.946 09:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.204 09:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:56.204 09:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:56.204 09:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:56.204 09:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:56.204 09:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:56.204 09:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:56.204 
09:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:56.204 09:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:56.204 09:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.204 09:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.204 [2024-10-21 09:55:32.711272] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:56.204 09:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.204 09:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:56.204 "name": "Existed_Raid", 00:10:56.204 "aliases": [ 00:10:56.204 "79275b4e-c103-49e7-b984-2b2f19e7b380" 00:10:56.204 ], 00:10:56.204 "product_name": "Raid Volume", 00:10:56.204 "block_size": 512, 00:10:56.204 "num_blocks": 253952, 00:10:56.204 "uuid": "79275b4e-c103-49e7-b984-2b2f19e7b380", 00:10:56.204 "assigned_rate_limits": { 00:10:56.204 "rw_ios_per_sec": 0, 00:10:56.204 "rw_mbytes_per_sec": 0, 00:10:56.204 "r_mbytes_per_sec": 0, 00:10:56.204 "w_mbytes_per_sec": 0 00:10:56.204 }, 00:10:56.204 "claimed": false, 00:10:56.204 "zoned": false, 00:10:56.204 "supported_io_types": { 00:10:56.204 "read": true, 00:10:56.204 "write": true, 00:10:56.204 "unmap": true, 00:10:56.204 "flush": true, 00:10:56.204 "reset": true, 00:10:56.204 "nvme_admin": false, 00:10:56.204 "nvme_io": false, 00:10:56.204 "nvme_io_md": false, 00:10:56.204 "write_zeroes": true, 00:10:56.204 "zcopy": false, 00:10:56.204 "get_zone_info": false, 00:10:56.204 "zone_management": false, 00:10:56.204 "zone_append": false, 00:10:56.204 "compare": false, 00:10:56.204 "compare_and_write": false, 00:10:56.204 "abort": false, 00:10:56.204 "seek_hole": false, 00:10:56.204 "seek_data": false, 00:10:56.204 "copy": false, 00:10:56.204 
"nvme_iov_md": false 00:10:56.204 }, 00:10:56.204 "memory_domains": [ 00:10:56.204 { 00:10:56.204 "dma_device_id": "system", 00:10:56.204 "dma_device_type": 1 00:10:56.204 }, 00:10:56.204 { 00:10:56.204 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.204 "dma_device_type": 2 00:10:56.204 }, 00:10:56.204 { 00:10:56.204 "dma_device_id": "system", 00:10:56.204 "dma_device_type": 1 00:10:56.204 }, 00:10:56.204 { 00:10:56.204 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.204 "dma_device_type": 2 00:10:56.204 }, 00:10:56.204 { 00:10:56.204 "dma_device_id": "system", 00:10:56.204 "dma_device_type": 1 00:10:56.204 }, 00:10:56.204 { 00:10:56.204 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.204 "dma_device_type": 2 00:10:56.204 }, 00:10:56.204 { 00:10:56.204 "dma_device_id": "system", 00:10:56.204 "dma_device_type": 1 00:10:56.204 }, 00:10:56.204 { 00:10:56.204 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.204 "dma_device_type": 2 00:10:56.204 } 00:10:56.204 ], 00:10:56.204 "driver_specific": { 00:10:56.204 "raid": { 00:10:56.204 "uuid": "79275b4e-c103-49e7-b984-2b2f19e7b380", 00:10:56.204 "strip_size_kb": 64, 00:10:56.204 "state": "online", 00:10:56.204 "raid_level": "raid0", 00:10:56.204 "superblock": true, 00:10:56.204 "num_base_bdevs": 4, 00:10:56.204 "num_base_bdevs_discovered": 4, 00:10:56.204 "num_base_bdevs_operational": 4, 00:10:56.204 "base_bdevs_list": [ 00:10:56.204 { 00:10:56.204 "name": "BaseBdev1", 00:10:56.204 "uuid": "1cd66f03-5603-4a3b-914e-b7e2373c7a8c", 00:10:56.204 "is_configured": true, 00:10:56.204 "data_offset": 2048, 00:10:56.204 "data_size": 63488 00:10:56.204 }, 00:10:56.204 { 00:10:56.204 "name": "BaseBdev2", 00:10:56.204 "uuid": "8f522f79-9c6a-4ef9-a531-f4b2d5620e22", 00:10:56.204 "is_configured": true, 00:10:56.204 "data_offset": 2048, 00:10:56.204 "data_size": 63488 00:10:56.204 }, 00:10:56.204 { 00:10:56.204 "name": "BaseBdev3", 00:10:56.204 "uuid": "e0ab4c75-de68-4629-a55c-26d67e6e2e71", 00:10:56.204 "is_configured": true, 
00:10:56.204 "data_offset": 2048, 00:10:56.204 "data_size": 63488 00:10:56.204 }, 00:10:56.204 { 00:10:56.204 "name": "BaseBdev4", 00:10:56.204 "uuid": "cda16ef4-7c33-40eb-af63-5fe83e075409", 00:10:56.204 "is_configured": true, 00:10:56.204 "data_offset": 2048, 00:10:56.204 "data_size": 63488 00:10:56.204 } 00:10:56.204 ] 00:10:56.204 } 00:10:56.204 } 00:10:56.204 }' 00:10:56.204 09:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:56.204 09:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:56.204 BaseBdev2 00:10:56.204 BaseBdev3 00:10:56.204 BaseBdev4' 00:10:56.204 09:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:56.462 09:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:56.462 09:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:56.462 09:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:56.462 09:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:56.462 09:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.462 09:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.462 09:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.462 09:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:56.462 09:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:56.462 09:55:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:56.462 09:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:56.462 09:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.462 09:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.462 09:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:56.462 09:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.462 09:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:56.462 09:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:56.462 09:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:56.462 09:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:56.462 09:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.462 09:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:56.462 09:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.462 09:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.462 09:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:56.462 09:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:56.462 09:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:10:56.462 09:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:56.462 09:55:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:56.462 09:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.462 09:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.462 09:55:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.462 09:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:56.462 09:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:56.462 09:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:56.462 09:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.462 09:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.462 [2024-10-21 09:55:33.014650] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:56.462 [2024-10-21 09:55:33.014690] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:56.462 [2024-10-21 09:55:33.014750] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:56.719 09:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.719 09:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:56.719 09:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:56.719 09:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:10:56.719 09:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:56.719 09:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:56.719 09:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:10:56.719 09:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:56.719 09:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:56.719 09:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:56.719 09:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:56.719 09:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:56.719 09:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:56.719 09:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:56.719 09:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:56.719 09:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:56.719 09:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.719 09:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:56.719 09:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.719 09:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.719 09:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:56.719 09:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.719 "name": "Existed_Raid", 00:10:56.719 "uuid": "79275b4e-c103-49e7-b984-2b2f19e7b380", 00:10:56.719 "strip_size_kb": 64, 00:10:56.719 "state": "offline", 00:10:56.719 "raid_level": "raid0", 00:10:56.719 "superblock": true, 00:10:56.719 "num_base_bdevs": 4, 00:10:56.719 "num_base_bdevs_discovered": 3, 00:10:56.719 "num_base_bdevs_operational": 3, 00:10:56.719 "base_bdevs_list": [ 00:10:56.719 { 00:10:56.719 "name": null, 00:10:56.719 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.719 "is_configured": false, 00:10:56.719 "data_offset": 0, 00:10:56.719 "data_size": 63488 00:10:56.719 }, 00:10:56.719 { 00:10:56.719 "name": "BaseBdev2", 00:10:56.719 "uuid": "8f522f79-9c6a-4ef9-a531-f4b2d5620e22", 00:10:56.720 "is_configured": true, 00:10:56.720 "data_offset": 2048, 00:10:56.720 "data_size": 63488 00:10:56.720 }, 00:10:56.720 { 00:10:56.720 "name": "BaseBdev3", 00:10:56.720 "uuid": "e0ab4c75-de68-4629-a55c-26d67e6e2e71", 00:10:56.720 "is_configured": true, 00:10:56.720 "data_offset": 2048, 00:10:56.720 "data_size": 63488 00:10:56.720 }, 00:10:56.720 { 00:10:56.720 "name": "BaseBdev4", 00:10:56.720 "uuid": "cda16ef4-7c33-40eb-af63-5fe83e075409", 00:10:56.720 "is_configured": true, 00:10:56.720 "data_offset": 2048, 00:10:56.720 "data_size": 63488 00:10:56.720 } 00:10:56.720 ] 00:10:56.720 }' 00:10:56.720 09:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.720 09:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.979 09:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:56.979 09:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:56.980 09:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.980 
09:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:56.980 09:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.980 09:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.239 09:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.240 09:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:57.240 09:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:57.240 09:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:57.240 09:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.240 09:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.240 [2024-10-21 09:55:33.609349] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:57.240 09:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.240 09:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:57.240 09:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:57.240 09:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.240 09:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.240 09:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.240 09:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:57.240 09:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:10:57.240 09:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:57.240 09:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:57.240 09:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:57.240 09:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.240 09:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.240 [2024-10-21 09:55:33.777199] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:57.499 09:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.499 09:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:57.499 09:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:57.500 09:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.500 09:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.500 09:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.500 09:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:57.500 09:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.500 09:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:57.500 09:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:57.500 09:55:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:57.500 09:55:33 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.500 09:55:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.500 [2024-10-21 09:55:33.937395] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:57.500 [2024-10-21 09:55:33.937468] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state offline 00:10:57.500 09:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.500 09:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:57.500 09:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:57.500 09:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.500 09:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.500 09:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.500 09:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:57.500 09:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.500 09:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:57.500 09:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:57.500 09:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:57.500 09:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:57.500 09:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:57.500 09:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:10:57.500 09:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.500 09:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.760 BaseBdev2 00:10:57.760 09:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.760 09:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:57.760 09:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:57.760 09:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:57.760 09:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:57.760 09:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:57.760 09:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:57.760 09:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:57.760 09:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.760 09:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.760 09:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.760 09:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:57.760 09:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.760 09:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.760 [ 00:10:57.760 { 00:10:57.760 "name": "BaseBdev2", 00:10:57.760 "aliases": [ 00:10:57.760 
"010a4d5d-afe1-4a84-b0c1-e2b0c283128b" 00:10:57.760 ], 00:10:57.760 "product_name": "Malloc disk", 00:10:57.760 "block_size": 512, 00:10:57.760 "num_blocks": 65536, 00:10:57.760 "uuid": "010a4d5d-afe1-4a84-b0c1-e2b0c283128b", 00:10:57.760 "assigned_rate_limits": { 00:10:57.760 "rw_ios_per_sec": 0, 00:10:57.760 "rw_mbytes_per_sec": 0, 00:10:57.760 "r_mbytes_per_sec": 0, 00:10:57.760 "w_mbytes_per_sec": 0 00:10:57.760 }, 00:10:57.760 "claimed": false, 00:10:57.760 "zoned": false, 00:10:57.760 "supported_io_types": { 00:10:57.760 "read": true, 00:10:57.760 "write": true, 00:10:57.760 "unmap": true, 00:10:57.760 "flush": true, 00:10:57.760 "reset": true, 00:10:57.760 "nvme_admin": false, 00:10:57.760 "nvme_io": false, 00:10:57.760 "nvme_io_md": false, 00:10:57.760 "write_zeroes": true, 00:10:57.760 "zcopy": true, 00:10:57.760 "get_zone_info": false, 00:10:57.760 "zone_management": false, 00:10:57.760 "zone_append": false, 00:10:57.760 "compare": false, 00:10:57.760 "compare_and_write": false, 00:10:57.760 "abort": true, 00:10:57.760 "seek_hole": false, 00:10:57.760 "seek_data": false, 00:10:57.760 "copy": true, 00:10:57.760 "nvme_iov_md": false 00:10:57.760 }, 00:10:57.760 "memory_domains": [ 00:10:57.760 { 00:10:57.760 "dma_device_id": "system", 00:10:57.760 "dma_device_type": 1 00:10:57.760 }, 00:10:57.760 { 00:10:57.760 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.760 "dma_device_type": 2 00:10:57.760 } 00:10:57.760 ], 00:10:57.760 "driver_specific": {} 00:10:57.760 } 00:10:57.760 ] 00:10:57.760 09:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.760 09:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:57.760 09:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:57.760 09:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:57.760 09:55:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:57.760 09:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.760 09:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.760 BaseBdev3 00:10:57.760 09:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.760 09:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:57.760 09:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:57.760 09:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:57.760 09:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:57.760 09:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:57.760 09:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:57.760 09:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:57.760 09:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.760 09:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.760 09:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.760 09:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:57.760 09:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.760 09:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.760 [ 00:10:57.760 { 
00:10:57.760 "name": "BaseBdev3", 00:10:57.760 "aliases": [ 00:10:57.760 "74dfa300-924d-4379-94b3-f6cff684b869" 00:10:57.760 ], 00:10:57.760 "product_name": "Malloc disk", 00:10:57.760 "block_size": 512, 00:10:57.760 "num_blocks": 65536, 00:10:57.760 "uuid": "74dfa300-924d-4379-94b3-f6cff684b869", 00:10:57.760 "assigned_rate_limits": { 00:10:57.760 "rw_ios_per_sec": 0, 00:10:57.760 "rw_mbytes_per_sec": 0, 00:10:57.760 "r_mbytes_per_sec": 0, 00:10:57.760 "w_mbytes_per_sec": 0 00:10:57.760 }, 00:10:57.760 "claimed": false, 00:10:57.760 "zoned": false, 00:10:57.760 "supported_io_types": { 00:10:57.760 "read": true, 00:10:57.760 "write": true, 00:10:57.760 "unmap": true, 00:10:57.760 "flush": true, 00:10:57.760 "reset": true, 00:10:57.760 "nvme_admin": false, 00:10:57.760 "nvme_io": false, 00:10:57.760 "nvme_io_md": false, 00:10:57.760 "write_zeroes": true, 00:10:57.760 "zcopy": true, 00:10:57.760 "get_zone_info": false, 00:10:57.760 "zone_management": false, 00:10:57.760 "zone_append": false, 00:10:57.760 "compare": false, 00:10:57.760 "compare_and_write": false, 00:10:57.760 "abort": true, 00:10:57.760 "seek_hole": false, 00:10:57.760 "seek_data": false, 00:10:57.760 "copy": true, 00:10:57.760 "nvme_iov_md": false 00:10:57.760 }, 00:10:57.760 "memory_domains": [ 00:10:57.760 { 00:10:57.760 "dma_device_id": "system", 00:10:57.760 "dma_device_type": 1 00:10:57.760 }, 00:10:57.760 { 00:10:57.760 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.760 "dma_device_type": 2 00:10:57.760 } 00:10:57.760 ], 00:10:57.760 "driver_specific": {} 00:10:57.760 } 00:10:57.760 ] 00:10:57.760 09:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.760 09:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:57.760 09:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:57.760 09:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:10:57.760 09:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:57.760 09:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.760 09:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.760 BaseBdev4 00:10:57.760 09:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.760 09:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:57.760 09:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:57.760 09:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:57.760 09:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:57.760 09:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:57.760 09:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:57.760 09:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:57.760 09:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.760 09:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.760 09:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.760 09:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:57.760 09:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.760 09:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:10:57.760 [ 00:10:57.760 { 00:10:57.760 "name": "BaseBdev4", 00:10:57.760 "aliases": [ 00:10:57.760 "2f2f0e76-0689-4182-99fd-09c97b6c1b7f" 00:10:57.760 ], 00:10:57.760 "product_name": "Malloc disk", 00:10:57.760 "block_size": 512, 00:10:57.760 "num_blocks": 65536, 00:10:57.760 "uuid": "2f2f0e76-0689-4182-99fd-09c97b6c1b7f", 00:10:57.761 "assigned_rate_limits": { 00:10:57.761 "rw_ios_per_sec": 0, 00:10:57.761 "rw_mbytes_per_sec": 0, 00:10:57.761 "r_mbytes_per_sec": 0, 00:10:57.761 "w_mbytes_per_sec": 0 00:10:57.761 }, 00:10:57.761 "claimed": false, 00:10:57.761 "zoned": false, 00:10:57.761 "supported_io_types": { 00:10:57.761 "read": true, 00:10:57.761 "write": true, 00:10:57.761 "unmap": true, 00:10:57.761 "flush": true, 00:10:57.761 "reset": true, 00:10:57.761 "nvme_admin": false, 00:10:57.761 "nvme_io": false, 00:10:57.761 "nvme_io_md": false, 00:10:57.761 "write_zeroes": true, 00:10:57.761 "zcopy": true, 00:10:57.761 "get_zone_info": false, 00:10:57.761 "zone_management": false, 00:10:58.020 "zone_append": false, 00:10:58.020 "compare": false, 00:10:58.020 "compare_and_write": false, 00:10:58.020 "abort": true, 00:10:58.020 "seek_hole": false, 00:10:58.020 "seek_data": false, 00:10:58.020 "copy": true, 00:10:58.020 "nvme_iov_md": false 00:10:58.020 }, 00:10:58.020 "memory_domains": [ 00:10:58.020 { 00:10:58.020 "dma_device_id": "system", 00:10:58.020 "dma_device_type": 1 00:10:58.020 }, 00:10:58.020 { 00:10:58.020 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:58.020 "dma_device_type": 2 00:10:58.020 } 00:10:58.020 ], 00:10:58.020 "driver_specific": {} 00:10:58.020 } 00:10:58.020 ] 00:10:58.020 09:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.020 09:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:58.020 09:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:58.020 09:55:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:58.020 09:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:58.020 09:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.020 09:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.020 [2024-10-21 09:55:34.361940] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:58.020 [2024-10-21 09:55:34.361992] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:58.020 [2024-10-21 09:55:34.362020] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:58.020 [2024-10-21 09:55:34.364135] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:58.020 [2024-10-21 09:55:34.364204] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:58.020 09:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.020 09:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:58.020 09:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:58.020 09:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:58.020 09:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:58.020 09:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:58.020 09:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:58.020 09:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:58.020 09:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.020 09:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.020 09:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.020 09:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.020 09:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.020 09:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.020 09:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:58.020 09:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.020 09:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.020 "name": "Existed_Raid", 00:10:58.020 "uuid": "bd1ebb6d-71cc-4119-9aa1-146d2436c215", 00:10:58.020 "strip_size_kb": 64, 00:10:58.020 "state": "configuring", 00:10:58.020 "raid_level": "raid0", 00:10:58.021 "superblock": true, 00:10:58.021 "num_base_bdevs": 4, 00:10:58.021 "num_base_bdevs_discovered": 3, 00:10:58.021 "num_base_bdevs_operational": 4, 00:10:58.021 "base_bdevs_list": [ 00:10:58.021 { 00:10:58.021 "name": "BaseBdev1", 00:10:58.021 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.021 "is_configured": false, 00:10:58.021 "data_offset": 0, 00:10:58.021 "data_size": 0 00:10:58.021 }, 00:10:58.021 { 00:10:58.021 "name": "BaseBdev2", 00:10:58.021 "uuid": "010a4d5d-afe1-4a84-b0c1-e2b0c283128b", 00:10:58.021 "is_configured": true, 00:10:58.021 "data_offset": 2048, 00:10:58.021 "data_size": 63488 
00:10:58.021 }, 00:10:58.021 { 00:10:58.021 "name": "BaseBdev3", 00:10:58.021 "uuid": "74dfa300-924d-4379-94b3-f6cff684b869", 00:10:58.021 "is_configured": true, 00:10:58.021 "data_offset": 2048, 00:10:58.021 "data_size": 63488 00:10:58.021 }, 00:10:58.021 { 00:10:58.021 "name": "BaseBdev4", 00:10:58.021 "uuid": "2f2f0e76-0689-4182-99fd-09c97b6c1b7f", 00:10:58.021 "is_configured": true, 00:10:58.021 "data_offset": 2048, 00:10:58.021 "data_size": 63488 00:10:58.021 } 00:10:58.021 ] 00:10:58.021 }' 00:10:58.021 09:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.021 09:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.280 09:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:58.280 09:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.280 09:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.280 [2024-10-21 09:55:34.805232] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:58.280 09:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.280 09:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:58.280 09:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:58.280 09:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:58.280 09:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:58.280 09:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:58.280 09:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:58.280 09:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:58.280 09:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.280 09:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.280 09:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.280 09:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:58.280 09:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.280 09:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.280 09:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.280 09:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.280 09:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.280 "name": "Existed_Raid", 00:10:58.280 "uuid": "bd1ebb6d-71cc-4119-9aa1-146d2436c215", 00:10:58.280 "strip_size_kb": 64, 00:10:58.280 "state": "configuring", 00:10:58.280 "raid_level": "raid0", 00:10:58.280 "superblock": true, 00:10:58.280 "num_base_bdevs": 4, 00:10:58.280 "num_base_bdevs_discovered": 2, 00:10:58.280 "num_base_bdevs_operational": 4, 00:10:58.280 "base_bdevs_list": [ 00:10:58.280 { 00:10:58.280 "name": "BaseBdev1", 00:10:58.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.280 "is_configured": false, 00:10:58.280 "data_offset": 0, 00:10:58.280 "data_size": 0 00:10:58.280 }, 00:10:58.280 { 00:10:58.280 "name": null, 00:10:58.280 "uuid": "010a4d5d-afe1-4a84-b0c1-e2b0c283128b", 00:10:58.280 "is_configured": false, 00:10:58.280 "data_offset": 0, 00:10:58.280 "data_size": 63488 
00:10:58.280 }, 00:10:58.280 { 00:10:58.280 "name": "BaseBdev3", 00:10:58.280 "uuid": "74dfa300-924d-4379-94b3-f6cff684b869", 00:10:58.280 "is_configured": true, 00:10:58.280 "data_offset": 2048, 00:10:58.280 "data_size": 63488 00:10:58.280 }, 00:10:58.280 { 00:10:58.280 "name": "BaseBdev4", 00:10:58.280 "uuid": "2f2f0e76-0689-4182-99fd-09c97b6c1b7f", 00:10:58.280 "is_configured": true, 00:10:58.280 "data_offset": 2048, 00:10:58.280 "data_size": 63488 00:10:58.280 } 00:10:58.280 ] 00:10:58.280 }' 00:10:58.280 09:55:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.280 09:55:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.848 09:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.848 09:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:58.848 09:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.848 09:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.848 09:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.848 09:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:58.848 09:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:58.848 09:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.848 09:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.848 [2024-10-21 09:55:35.292808] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:58.848 BaseBdev1 00:10:58.848 09:55:35 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.848 09:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:58.848 09:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:58.848 09:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:58.848 09:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:58.848 09:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:58.848 09:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:58.848 09:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:58.848 09:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.848 09:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.848 09:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.848 09:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:58.848 09:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.848 09:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.848 [ 00:10:58.848 { 00:10:58.848 "name": "BaseBdev1", 00:10:58.848 "aliases": [ 00:10:58.848 "8d63f342-258a-4c18-a622-50289ea8cd46" 00:10:58.848 ], 00:10:58.848 "product_name": "Malloc disk", 00:10:58.848 "block_size": 512, 00:10:58.848 "num_blocks": 65536, 00:10:58.848 "uuid": "8d63f342-258a-4c18-a622-50289ea8cd46", 00:10:58.848 "assigned_rate_limits": { 00:10:58.848 "rw_ios_per_sec": 0, 00:10:58.848 "rw_mbytes_per_sec": 0, 
00:10:58.848 "r_mbytes_per_sec": 0, 00:10:58.848 "w_mbytes_per_sec": 0 00:10:58.848 }, 00:10:58.848 "claimed": true, 00:10:58.848 "claim_type": "exclusive_write", 00:10:58.848 "zoned": false, 00:10:58.848 "supported_io_types": { 00:10:58.848 "read": true, 00:10:58.848 "write": true, 00:10:58.848 "unmap": true, 00:10:58.848 "flush": true, 00:10:58.848 "reset": true, 00:10:58.848 "nvme_admin": false, 00:10:58.848 "nvme_io": false, 00:10:58.848 "nvme_io_md": false, 00:10:58.849 "write_zeroes": true, 00:10:58.849 "zcopy": true, 00:10:58.849 "get_zone_info": false, 00:10:58.849 "zone_management": false, 00:10:58.849 "zone_append": false, 00:10:58.849 "compare": false, 00:10:58.849 "compare_and_write": false, 00:10:58.849 "abort": true, 00:10:58.849 "seek_hole": false, 00:10:58.849 "seek_data": false, 00:10:58.849 "copy": true, 00:10:58.849 "nvme_iov_md": false 00:10:58.849 }, 00:10:58.849 "memory_domains": [ 00:10:58.849 { 00:10:58.849 "dma_device_id": "system", 00:10:58.849 "dma_device_type": 1 00:10:58.849 }, 00:10:58.849 { 00:10:58.849 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:58.849 "dma_device_type": 2 00:10:58.849 } 00:10:58.849 ], 00:10:58.849 "driver_specific": {} 00:10:58.849 } 00:10:58.849 ] 00:10:58.849 09:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.849 09:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:58.849 09:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:58.849 09:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:58.849 09:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:58.849 09:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:58.849 09:55:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:58.849 09:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:58.849 09:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:58.849 09:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.849 09:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.849 09:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.849 09:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.849 09:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:58.849 09:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.849 09:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.849 09:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.849 09:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.849 "name": "Existed_Raid", 00:10:58.849 "uuid": "bd1ebb6d-71cc-4119-9aa1-146d2436c215", 00:10:58.849 "strip_size_kb": 64, 00:10:58.849 "state": "configuring", 00:10:58.849 "raid_level": "raid0", 00:10:58.849 "superblock": true, 00:10:58.849 "num_base_bdevs": 4, 00:10:58.849 "num_base_bdevs_discovered": 3, 00:10:58.849 "num_base_bdevs_operational": 4, 00:10:58.849 "base_bdevs_list": [ 00:10:58.849 { 00:10:58.849 "name": "BaseBdev1", 00:10:58.849 "uuid": "8d63f342-258a-4c18-a622-50289ea8cd46", 00:10:58.849 "is_configured": true, 00:10:58.849 "data_offset": 2048, 00:10:58.849 "data_size": 63488 00:10:58.849 }, 00:10:58.849 { 
00:10:58.849 "name": null, 00:10:58.849 "uuid": "010a4d5d-afe1-4a84-b0c1-e2b0c283128b", 00:10:58.849 "is_configured": false, 00:10:58.849 "data_offset": 0, 00:10:58.849 "data_size": 63488 00:10:58.849 }, 00:10:58.849 { 00:10:58.849 "name": "BaseBdev3", 00:10:58.849 "uuid": "74dfa300-924d-4379-94b3-f6cff684b869", 00:10:58.849 "is_configured": true, 00:10:58.849 "data_offset": 2048, 00:10:58.849 "data_size": 63488 00:10:58.849 }, 00:10:58.849 { 00:10:58.849 "name": "BaseBdev4", 00:10:58.849 "uuid": "2f2f0e76-0689-4182-99fd-09c97b6c1b7f", 00:10:58.849 "is_configured": true, 00:10:58.849 "data_offset": 2048, 00:10:58.849 "data_size": 63488 00:10:58.849 } 00:10:58.849 ] 00:10:58.849 }' 00:10:58.849 09:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.849 09:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.419 09:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.419 09:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.419 09:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.419 09:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:59.419 09:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.419 09:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:59.419 09:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:59.419 09:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.419 09:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.419 [2024-10-21 09:55:35.800082] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:59.419 09:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.419 09:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:59.419 09:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:59.419 09:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:59.419 09:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:59.419 09:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:59.419 09:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:59.419 09:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:59.419 09:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:59.419 09:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:59.419 09:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:59.419 09:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.419 09:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.419 09:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.419 09:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:59.419 09:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.419 09:55:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:59.419 "name": "Existed_Raid", 00:10:59.419 "uuid": "bd1ebb6d-71cc-4119-9aa1-146d2436c215", 00:10:59.419 "strip_size_kb": 64, 00:10:59.419 "state": "configuring", 00:10:59.419 "raid_level": "raid0", 00:10:59.419 "superblock": true, 00:10:59.419 "num_base_bdevs": 4, 00:10:59.419 "num_base_bdevs_discovered": 2, 00:10:59.419 "num_base_bdevs_operational": 4, 00:10:59.419 "base_bdevs_list": [ 00:10:59.419 { 00:10:59.419 "name": "BaseBdev1", 00:10:59.419 "uuid": "8d63f342-258a-4c18-a622-50289ea8cd46", 00:10:59.419 "is_configured": true, 00:10:59.419 "data_offset": 2048, 00:10:59.419 "data_size": 63488 00:10:59.419 }, 00:10:59.419 { 00:10:59.419 "name": null, 00:10:59.419 "uuid": "010a4d5d-afe1-4a84-b0c1-e2b0c283128b", 00:10:59.419 "is_configured": false, 00:10:59.419 "data_offset": 0, 00:10:59.419 "data_size": 63488 00:10:59.419 }, 00:10:59.419 { 00:10:59.419 "name": null, 00:10:59.419 "uuid": "74dfa300-924d-4379-94b3-f6cff684b869", 00:10:59.419 "is_configured": false, 00:10:59.419 "data_offset": 0, 00:10:59.419 "data_size": 63488 00:10:59.419 }, 00:10:59.419 { 00:10:59.419 "name": "BaseBdev4", 00:10:59.419 "uuid": "2f2f0e76-0689-4182-99fd-09c97b6c1b7f", 00:10:59.419 "is_configured": true, 00:10:59.419 "data_offset": 2048, 00:10:59.419 "data_size": 63488 00:10:59.419 } 00:10:59.419 ] 00:10:59.419 }' 00:10:59.419 09:55:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:59.419 09:55:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.680 09:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.680 09:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:59.680 09:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.680 
09:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.680 09:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.940 09:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:59.940 09:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:59.940 09:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.940 09:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.940 [2024-10-21 09:55:36.307231] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:59.940 09:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.940 09:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:59.940 09:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:59.940 09:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:59.940 09:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:59.940 09:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:59.940 09:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:59.940 09:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:59.940 09:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:59.940 09:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:59.940 09:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:59.940 09:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.940 09:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.940 09:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.940 09:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:59.940 09:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.940 09:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:59.940 "name": "Existed_Raid", 00:10:59.940 "uuid": "bd1ebb6d-71cc-4119-9aa1-146d2436c215", 00:10:59.940 "strip_size_kb": 64, 00:10:59.940 "state": "configuring", 00:10:59.940 "raid_level": "raid0", 00:10:59.940 "superblock": true, 00:10:59.940 "num_base_bdevs": 4, 00:10:59.940 "num_base_bdevs_discovered": 3, 00:10:59.940 "num_base_bdevs_operational": 4, 00:10:59.940 "base_bdevs_list": [ 00:10:59.940 { 00:10:59.940 "name": "BaseBdev1", 00:10:59.940 "uuid": "8d63f342-258a-4c18-a622-50289ea8cd46", 00:10:59.940 "is_configured": true, 00:10:59.940 "data_offset": 2048, 00:10:59.940 "data_size": 63488 00:10:59.940 }, 00:10:59.940 { 00:10:59.940 "name": null, 00:10:59.940 "uuid": "010a4d5d-afe1-4a84-b0c1-e2b0c283128b", 00:10:59.940 "is_configured": false, 00:10:59.940 "data_offset": 0, 00:10:59.940 "data_size": 63488 00:10:59.940 }, 00:10:59.940 { 00:10:59.940 "name": "BaseBdev3", 00:10:59.940 "uuid": "74dfa300-924d-4379-94b3-f6cff684b869", 00:10:59.940 "is_configured": true, 00:10:59.940 "data_offset": 2048, 00:10:59.940 "data_size": 63488 00:10:59.940 }, 00:10:59.940 { 00:10:59.940 "name": "BaseBdev4", 00:10:59.940 "uuid": 
"2f2f0e76-0689-4182-99fd-09c97b6c1b7f", 00:10:59.940 "is_configured": true, 00:10:59.940 "data_offset": 2048, 00:10:59.940 "data_size": 63488 00:10:59.940 } 00:10:59.940 ] 00:10:59.940 }' 00:10:59.940 09:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:59.940 09:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.202 09:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.202 09:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:00.202 09:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.202 09:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.202 09:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.202 09:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:00.202 09:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:00.202 09:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.202 09:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.202 [2024-10-21 09:55:36.794792] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:00.464 09:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.464 09:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:00.464 09:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:00.464 09:55:36 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:00.464 09:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:00.464 09:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:00.464 09:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:00.464 09:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.464 09:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.464 09:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.464 09:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.464 09:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.464 09:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:00.464 09:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.464 09:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.464 09:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.464 09:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.464 "name": "Existed_Raid", 00:11:00.464 "uuid": "bd1ebb6d-71cc-4119-9aa1-146d2436c215", 00:11:00.464 "strip_size_kb": 64, 00:11:00.464 "state": "configuring", 00:11:00.464 "raid_level": "raid0", 00:11:00.464 "superblock": true, 00:11:00.464 "num_base_bdevs": 4, 00:11:00.464 "num_base_bdevs_discovered": 2, 00:11:00.464 "num_base_bdevs_operational": 4, 00:11:00.464 "base_bdevs_list": [ 00:11:00.464 { 00:11:00.464 "name": null, 00:11:00.464 
"uuid": "8d63f342-258a-4c18-a622-50289ea8cd46", 00:11:00.464 "is_configured": false, 00:11:00.464 "data_offset": 0, 00:11:00.464 "data_size": 63488 00:11:00.464 }, 00:11:00.464 { 00:11:00.464 "name": null, 00:11:00.464 "uuid": "010a4d5d-afe1-4a84-b0c1-e2b0c283128b", 00:11:00.464 "is_configured": false, 00:11:00.464 "data_offset": 0, 00:11:00.464 "data_size": 63488 00:11:00.464 }, 00:11:00.464 { 00:11:00.464 "name": "BaseBdev3", 00:11:00.464 "uuid": "74dfa300-924d-4379-94b3-f6cff684b869", 00:11:00.464 "is_configured": true, 00:11:00.464 "data_offset": 2048, 00:11:00.464 "data_size": 63488 00:11:00.464 }, 00:11:00.464 { 00:11:00.464 "name": "BaseBdev4", 00:11:00.464 "uuid": "2f2f0e76-0689-4182-99fd-09c97b6c1b7f", 00:11:00.464 "is_configured": true, 00:11:00.464 "data_offset": 2048, 00:11:00.464 "data_size": 63488 00:11:00.464 } 00:11:00.464 ] 00:11:00.464 }' 00:11:00.464 09:55:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.464 09:55:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.032 09:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.032 09:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:01.032 09:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.032 09:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.032 09:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.032 09:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:01.032 09:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:01.032 09:55:37 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.032 09:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.032 [2024-10-21 09:55:37.371978] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:01.032 09:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.032 09:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:01.032 09:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:01.032 09:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:01.032 09:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:01.032 09:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:01.032 09:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:01.032 09:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:01.032 09:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:01.032 09:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:01.032 09:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:01.032 09:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.032 09:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:01.032 09:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.032 09:55:37 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.032 09:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.032 09:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:01.032 "name": "Existed_Raid", 00:11:01.032 "uuid": "bd1ebb6d-71cc-4119-9aa1-146d2436c215", 00:11:01.032 "strip_size_kb": 64, 00:11:01.032 "state": "configuring", 00:11:01.032 "raid_level": "raid0", 00:11:01.032 "superblock": true, 00:11:01.032 "num_base_bdevs": 4, 00:11:01.032 "num_base_bdevs_discovered": 3, 00:11:01.032 "num_base_bdevs_operational": 4, 00:11:01.032 "base_bdevs_list": [ 00:11:01.032 { 00:11:01.032 "name": null, 00:11:01.032 "uuid": "8d63f342-258a-4c18-a622-50289ea8cd46", 00:11:01.032 "is_configured": false, 00:11:01.032 "data_offset": 0, 00:11:01.032 "data_size": 63488 00:11:01.032 }, 00:11:01.032 { 00:11:01.032 "name": "BaseBdev2", 00:11:01.032 "uuid": "010a4d5d-afe1-4a84-b0c1-e2b0c283128b", 00:11:01.032 "is_configured": true, 00:11:01.032 "data_offset": 2048, 00:11:01.032 "data_size": 63488 00:11:01.032 }, 00:11:01.032 { 00:11:01.032 "name": "BaseBdev3", 00:11:01.032 "uuid": "74dfa300-924d-4379-94b3-f6cff684b869", 00:11:01.032 "is_configured": true, 00:11:01.032 "data_offset": 2048, 00:11:01.032 "data_size": 63488 00:11:01.032 }, 00:11:01.032 { 00:11:01.032 "name": "BaseBdev4", 00:11:01.032 "uuid": "2f2f0e76-0689-4182-99fd-09c97b6c1b7f", 00:11:01.032 "is_configured": true, 00:11:01.032 "data_offset": 2048, 00:11:01.032 "data_size": 63488 00:11:01.032 } 00:11:01.032 ] 00:11:01.032 }' 00:11:01.032 09:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:01.032 09:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.292 09:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.292 09:55:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:01.292 09:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.292 09:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.292 09:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.292 09:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:01.292 09:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:01.292 09:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.292 09:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.292 09:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.551 09:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.551 09:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 8d63f342-258a-4c18-a622-50289ea8cd46 00:11:01.551 09:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.551 09:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.551 [2024-10-21 09:55:37.970220] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:01.551 [2024-10-21 09:55:37.970500] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:11:01.551 [2024-10-21 09:55:37.970516] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:01.551 [2024-10-21 09:55:37.970855] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006150 00:11:01.551 [2024-10-21 09:55:37.971066] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:11:01.551 [2024-10-21 09:55:37.971092] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006600 00:11:01.551 NewBaseBdev 00:11:01.551 [2024-10-21 09:55:37.971253] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:01.551 09:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.551 09:55:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:01.551 09:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:11:01.551 09:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:01.551 09:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:01.551 09:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:01.551 09:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:01.551 09:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:01.551 09:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.551 09:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.551 09:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.551 09:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:01.551 09:55:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.551 09:55:37 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.551 [ 00:11:01.551 { 00:11:01.551 "name": "NewBaseBdev", 00:11:01.551 "aliases": [ 00:11:01.551 "8d63f342-258a-4c18-a622-50289ea8cd46" 00:11:01.551 ], 00:11:01.551 "product_name": "Malloc disk", 00:11:01.551 "block_size": 512, 00:11:01.551 "num_blocks": 65536, 00:11:01.551 "uuid": "8d63f342-258a-4c18-a622-50289ea8cd46", 00:11:01.551 "assigned_rate_limits": { 00:11:01.551 "rw_ios_per_sec": 0, 00:11:01.551 "rw_mbytes_per_sec": 0, 00:11:01.551 "r_mbytes_per_sec": 0, 00:11:01.551 "w_mbytes_per_sec": 0 00:11:01.551 }, 00:11:01.551 "claimed": true, 00:11:01.551 "claim_type": "exclusive_write", 00:11:01.551 "zoned": false, 00:11:01.551 "supported_io_types": { 00:11:01.551 "read": true, 00:11:01.551 "write": true, 00:11:01.551 "unmap": true, 00:11:01.551 "flush": true, 00:11:01.551 "reset": true, 00:11:01.551 "nvme_admin": false, 00:11:01.551 "nvme_io": false, 00:11:01.551 "nvme_io_md": false, 00:11:01.551 "write_zeroes": true, 00:11:01.551 "zcopy": true, 00:11:01.551 "get_zone_info": false, 00:11:01.551 "zone_management": false, 00:11:01.551 "zone_append": false, 00:11:01.551 "compare": false, 00:11:01.551 "compare_and_write": false, 00:11:01.551 "abort": true, 00:11:01.551 "seek_hole": false, 00:11:01.551 "seek_data": false, 00:11:01.551 "copy": true, 00:11:01.551 "nvme_iov_md": false 00:11:01.551 }, 00:11:01.551 "memory_domains": [ 00:11:01.551 { 00:11:01.551 "dma_device_id": "system", 00:11:01.551 "dma_device_type": 1 00:11:01.551 }, 00:11:01.551 { 00:11:01.551 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.551 "dma_device_type": 2 00:11:01.551 } 00:11:01.551 ], 00:11:01.551 "driver_specific": {} 00:11:01.551 } 00:11:01.551 ] 00:11:01.551 09:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.551 09:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:01.551 09:55:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:11:01.551 09:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:01.551 09:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:01.551 09:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:01.551 09:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:01.551 09:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:01.551 09:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:01.551 09:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:01.551 09:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:01.551 09:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:01.552 09:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.552 09:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.552 09:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:01.552 09:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.552 09:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.552 09:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:01.552 "name": "Existed_Raid", 00:11:01.552 "uuid": "bd1ebb6d-71cc-4119-9aa1-146d2436c215", 00:11:01.552 "strip_size_kb": 64, 00:11:01.552 
"state": "online", 00:11:01.552 "raid_level": "raid0", 00:11:01.552 "superblock": true, 00:11:01.552 "num_base_bdevs": 4, 00:11:01.552 "num_base_bdevs_discovered": 4, 00:11:01.552 "num_base_bdevs_operational": 4, 00:11:01.552 "base_bdevs_list": [ 00:11:01.552 { 00:11:01.552 "name": "NewBaseBdev", 00:11:01.552 "uuid": "8d63f342-258a-4c18-a622-50289ea8cd46", 00:11:01.552 "is_configured": true, 00:11:01.552 "data_offset": 2048, 00:11:01.552 "data_size": 63488 00:11:01.552 }, 00:11:01.552 { 00:11:01.552 "name": "BaseBdev2", 00:11:01.552 "uuid": "010a4d5d-afe1-4a84-b0c1-e2b0c283128b", 00:11:01.552 "is_configured": true, 00:11:01.552 "data_offset": 2048, 00:11:01.552 "data_size": 63488 00:11:01.552 }, 00:11:01.552 { 00:11:01.552 "name": "BaseBdev3", 00:11:01.552 "uuid": "74dfa300-924d-4379-94b3-f6cff684b869", 00:11:01.552 "is_configured": true, 00:11:01.552 "data_offset": 2048, 00:11:01.552 "data_size": 63488 00:11:01.552 }, 00:11:01.552 { 00:11:01.552 "name": "BaseBdev4", 00:11:01.552 "uuid": "2f2f0e76-0689-4182-99fd-09c97b6c1b7f", 00:11:01.552 "is_configured": true, 00:11:01.552 "data_offset": 2048, 00:11:01.552 "data_size": 63488 00:11:01.552 } 00:11:01.552 ] 00:11:01.552 }' 00:11:01.552 09:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:01.552 09:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.121 09:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:02.121 09:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:02.121 09:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:02.121 09:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:02.121 09:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:02.121 
09:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:02.121 09:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:02.121 09:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:02.121 09:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.121 09:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.121 [2024-10-21 09:55:38.477822] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:02.121 09:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.121 09:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:02.121 "name": "Existed_Raid", 00:11:02.121 "aliases": [ 00:11:02.121 "bd1ebb6d-71cc-4119-9aa1-146d2436c215" 00:11:02.121 ], 00:11:02.121 "product_name": "Raid Volume", 00:11:02.121 "block_size": 512, 00:11:02.121 "num_blocks": 253952, 00:11:02.121 "uuid": "bd1ebb6d-71cc-4119-9aa1-146d2436c215", 00:11:02.121 "assigned_rate_limits": { 00:11:02.121 "rw_ios_per_sec": 0, 00:11:02.121 "rw_mbytes_per_sec": 0, 00:11:02.121 "r_mbytes_per_sec": 0, 00:11:02.121 "w_mbytes_per_sec": 0 00:11:02.121 }, 00:11:02.121 "claimed": false, 00:11:02.121 "zoned": false, 00:11:02.121 "supported_io_types": { 00:11:02.121 "read": true, 00:11:02.121 "write": true, 00:11:02.121 "unmap": true, 00:11:02.121 "flush": true, 00:11:02.121 "reset": true, 00:11:02.121 "nvme_admin": false, 00:11:02.121 "nvme_io": false, 00:11:02.121 "nvme_io_md": false, 00:11:02.121 "write_zeroes": true, 00:11:02.121 "zcopy": false, 00:11:02.121 "get_zone_info": false, 00:11:02.121 "zone_management": false, 00:11:02.121 "zone_append": false, 00:11:02.121 "compare": false, 00:11:02.121 "compare_and_write": false, 00:11:02.121 "abort": 
false, 00:11:02.121 "seek_hole": false, 00:11:02.121 "seek_data": false, 00:11:02.121 "copy": false, 00:11:02.121 "nvme_iov_md": false 00:11:02.121 }, 00:11:02.121 "memory_domains": [ 00:11:02.121 { 00:11:02.121 "dma_device_id": "system", 00:11:02.121 "dma_device_type": 1 00:11:02.121 }, 00:11:02.121 { 00:11:02.121 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.121 "dma_device_type": 2 00:11:02.121 }, 00:11:02.121 { 00:11:02.121 "dma_device_id": "system", 00:11:02.121 "dma_device_type": 1 00:11:02.121 }, 00:11:02.121 { 00:11:02.121 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.121 "dma_device_type": 2 00:11:02.121 }, 00:11:02.121 { 00:11:02.121 "dma_device_id": "system", 00:11:02.121 "dma_device_type": 1 00:11:02.121 }, 00:11:02.121 { 00:11:02.121 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.121 "dma_device_type": 2 00:11:02.121 }, 00:11:02.121 { 00:11:02.121 "dma_device_id": "system", 00:11:02.122 "dma_device_type": 1 00:11:02.122 }, 00:11:02.122 { 00:11:02.122 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.122 "dma_device_type": 2 00:11:02.122 } 00:11:02.122 ], 00:11:02.122 "driver_specific": { 00:11:02.122 "raid": { 00:11:02.122 "uuid": "bd1ebb6d-71cc-4119-9aa1-146d2436c215", 00:11:02.122 "strip_size_kb": 64, 00:11:02.122 "state": "online", 00:11:02.122 "raid_level": "raid0", 00:11:02.122 "superblock": true, 00:11:02.122 "num_base_bdevs": 4, 00:11:02.122 "num_base_bdevs_discovered": 4, 00:11:02.122 "num_base_bdevs_operational": 4, 00:11:02.122 "base_bdevs_list": [ 00:11:02.122 { 00:11:02.122 "name": "NewBaseBdev", 00:11:02.122 "uuid": "8d63f342-258a-4c18-a622-50289ea8cd46", 00:11:02.122 "is_configured": true, 00:11:02.122 "data_offset": 2048, 00:11:02.122 "data_size": 63488 00:11:02.122 }, 00:11:02.122 { 00:11:02.122 "name": "BaseBdev2", 00:11:02.122 "uuid": "010a4d5d-afe1-4a84-b0c1-e2b0c283128b", 00:11:02.122 "is_configured": true, 00:11:02.122 "data_offset": 2048, 00:11:02.122 "data_size": 63488 00:11:02.122 }, 00:11:02.122 { 00:11:02.122 
"name": "BaseBdev3", 00:11:02.122 "uuid": "74dfa300-924d-4379-94b3-f6cff684b869", 00:11:02.122 "is_configured": true, 00:11:02.122 "data_offset": 2048, 00:11:02.122 "data_size": 63488 00:11:02.122 }, 00:11:02.122 { 00:11:02.122 "name": "BaseBdev4", 00:11:02.122 "uuid": "2f2f0e76-0689-4182-99fd-09c97b6c1b7f", 00:11:02.122 "is_configured": true, 00:11:02.122 "data_offset": 2048, 00:11:02.122 "data_size": 63488 00:11:02.122 } 00:11:02.122 ] 00:11:02.122 } 00:11:02.122 } 00:11:02.122 }' 00:11:02.122 09:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:02.122 09:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:02.122 BaseBdev2 00:11:02.122 BaseBdev3 00:11:02.122 BaseBdev4' 00:11:02.122 09:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:02.122 09:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:02.122 09:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:02.122 09:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:02.122 09:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.122 09:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.122 09:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:02.122 09:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.122 09:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:02.122 09:55:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:02.122 09:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:02.122 09:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:02.122 09:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:02.122 09:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.122 09:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.122 09:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.122 09:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:02.122 09:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:02.122 09:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:02.122 09:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:02.122 09:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.122 09:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.122 09:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:02.122 09:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.122 09:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:02.122 09:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:11:02.122 09:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:02.122 09:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:02.122 09:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:02.122 09:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.122 09:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.122 09:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.122 09:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:02.122 09:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:02.122 09:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:02.122 09:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.122 09:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.380 [2024-10-21 09:55:38.716997] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:02.381 [2024-10-21 09:55:38.717042] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:02.381 [2024-10-21 09:55:38.717130] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:02.381 [2024-10-21 09:55:38.717221] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:02.381 [2024-10-21 09:55:38.717241] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, 
state offline 00:11:02.381 09:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.381 09:55:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 69626 00:11:02.381 09:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 69626 ']' 00:11:02.381 09:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 69626 00:11:02.381 09:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:11:02.381 09:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:02.381 09:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69626 00:11:02.381 killing process with pid 69626 00:11:02.381 09:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:02.381 09:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:02.381 09:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69626' 00:11:02.381 09:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 69626 00:11:02.381 [2024-10-21 09:55:38.764992] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:02.381 09:55:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 69626 00:11:02.639 [2024-10-21 09:55:39.194451] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:04.019 09:55:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:04.019 00:11:04.019 real 0m11.563s 00:11:04.019 user 0m18.180s 00:11:04.019 sys 0m2.022s 00:11:04.019 09:55:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:04.019 09:55:40 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.019 ************************************ 00:11:04.019 END TEST raid_state_function_test_sb 00:11:04.020 ************************************ 00:11:04.020 09:55:40 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:11:04.020 09:55:40 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:04.020 09:55:40 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:04.020 09:55:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:04.020 ************************************ 00:11:04.020 START TEST raid_superblock_test 00:11:04.020 ************************************ 00:11:04.020 09:55:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 4 00:11:04.020 09:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:11:04.020 09:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:11:04.020 09:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:04.020 09:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:04.020 09:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:04.020 09:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:04.020 09:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:04.020 09:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:04.020 09:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:04.020 09:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:04.020 09:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 
-- # local strip_size_create_arg 00:11:04.020 09:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:04.020 09:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:04.020 09:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:11:04.020 09:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:11:04.020 09:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:11:04.020 09:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=70296 00:11:04.020 09:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:04.020 09:55:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 70296 00:11:04.020 09:55:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 70296 ']' 00:11:04.020 09:55:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:04.020 09:55:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:04.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:04.020 09:55:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:04.020 09:55:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:04.020 09:55:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.020 [2024-10-21 09:55:40.539782] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
00:11:04.020 [2024-10-21 09:55:40.539901] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70296 ] 00:11:04.280 [2024-10-21 09:55:40.704785] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:04.280 [2024-10-21 09:55:40.833143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:04.540 [2024-10-21 09:55:41.059846] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:04.540 [2024-10-21 09:55:41.059895] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:05.112 09:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:05.112 09:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:11:05.112 09:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:05.112 09:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:05.112 09:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:05.112 09:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:05.112 09:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:05.112 09:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:05.112 09:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:05.112 09:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:05.112 09:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:05.112 
09:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.112 09:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.112 malloc1 00:11:05.112 09:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.112 09:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:05.112 09:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.112 09:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.112 [2024-10-21 09:55:41.505249] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:05.112 [2024-10-21 09:55:41.505368] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:05.112 [2024-10-21 09:55:41.505402] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:11:05.112 [2024-10-21 09:55:41.505414] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:05.112 [2024-10-21 09:55:41.508219] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:05.112 [2024-10-21 09:55:41.508278] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:05.112 pt1 00:11:05.112 09:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.112 09:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:05.112 09:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:05.112 09:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:05.112 09:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:05.112 09:55:41 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:05.112 09:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:05.112 09:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:05.112 09:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:05.112 09:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:05.112 09:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.112 09:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.112 malloc2 00:11:05.112 09:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.112 09:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:05.112 09:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.112 09:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.112 [2024-10-21 09:55:41.570905] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:05.112 [2024-10-21 09:55:41.570995] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:05.112 [2024-10-21 09:55:41.571024] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:11:05.112 [2024-10-21 09:55:41.571036] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:05.112 [2024-10-21 09:55:41.573676] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:05.112 [2024-10-21 09:55:41.573731] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:05.112 
pt2 00:11:05.112 09:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.112 09:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:05.112 09:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:05.112 09:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:05.112 09:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:05.112 09:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:05.112 09:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:05.112 09:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:05.112 09:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:05.112 09:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:05.112 09:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.112 09:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.112 malloc3 00:11:05.112 09:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.112 09:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:05.112 09:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.112 09:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.112 [2024-10-21 09:55:41.657346] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:05.112 [2024-10-21 09:55:41.657436] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:05.112 [2024-10-21 09:55:41.657463] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:11:05.112 [2024-10-21 09:55:41.657475] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:05.112 [2024-10-21 09:55:41.660076] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:05.112 [2024-10-21 09:55:41.660127] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:05.112 pt3 00:11:05.112 09:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.112 09:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:05.112 09:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:05.112 09:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:11:05.112 09:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:11:05.112 09:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:11:05.112 09:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:05.112 09:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:05.112 09:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:05.112 09:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:11:05.112 09:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.112 09:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.373 malloc4 00:11:05.373 09:55:41 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.373 09:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:05.373 09:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.373 09:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.373 [2024-10-21 09:55:41.719163] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:05.373 [2024-10-21 09:55:41.719247] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:05.373 [2024-10-21 09:55:41.719273] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:05.373 [2024-10-21 09:55:41.719285] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:05.373 [2024-10-21 09:55:41.721806] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:05.373 [2024-10-21 09:55:41.721856] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:05.373 pt4 00:11:05.373 09:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.373 09:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:05.373 09:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:05.373 09:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:11:05.373 09:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.373 09:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.373 [2024-10-21 09:55:41.731228] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:05.373 [2024-10-21 
09:55:41.733390] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:05.373 [2024-10-21 09:55:41.733490] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:05.373 [2024-10-21 09:55:41.733562] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:05.373 [2024-10-21 09:55:41.733789] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000005b80 00:11:05.373 [2024-10-21 09:55:41.733803] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:05.373 [2024-10-21 09:55:41.734146] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:11:05.373 [2024-10-21 09:55:41.734363] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000005b80 00:11:05.373 [2024-10-21 09:55:41.734379] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000005b80 00:11:05.373 [2024-10-21 09:55:41.734621] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:05.373 09:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.373 09:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:05.373 09:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:05.373 09:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:05.373 09:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:05.373 09:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:05.373 09:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:05.373 09:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:11:05.373 09:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:05.373 09:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:05.373 09:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:05.373 09:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.373 09:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:05.373 09:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.373 09:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.373 09:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.373 09:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:05.373 "name": "raid_bdev1", 00:11:05.373 "uuid": "09b329a8-8ad4-4ba7-9a65-936769cdf9e0", 00:11:05.373 "strip_size_kb": 64, 00:11:05.373 "state": "online", 00:11:05.373 "raid_level": "raid0", 00:11:05.373 "superblock": true, 00:11:05.373 "num_base_bdevs": 4, 00:11:05.373 "num_base_bdevs_discovered": 4, 00:11:05.373 "num_base_bdevs_operational": 4, 00:11:05.373 "base_bdevs_list": [ 00:11:05.373 { 00:11:05.373 "name": "pt1", 00:11:05.373 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:05.373 "is_configured": true, 00:11:05.373 "data_offset": 2048, 00:11:05.373 "data_size": 63488 00:11:05.373 }, 00:11:05.373 { 00:11:05.373 "name": "pt2", 00:11:05.373 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:05.373 "is_configured": true, 00:11:05.373 "data_offset": 2048, 00:11:05.373 "data_size": 63488 00:11:05.373 }, 00:11:05.373 { 00:11:05.373 "name": "pt3", 00:11:05.373 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:05.373 "is_configured": true, 00:11:05.373 "data_offset": 2048, 00:11:05.373 
"data_size": 63488 00:11:05.373 }, 00:11:05.373 { 00:11:05.373 "name": "pt4", 00:11:05.373 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:05.373 "is_configured": true, 00:11:05.373 "data_offset": 2048, 00:11:05.373 "data_size": 63488 00:11:05.373 } 00:11:05.373 ] 00:11:05.373 }' 00:11:05.373 09:55:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:05.373 09:55:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.632 09:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:05.632 09:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:05.632 09:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:05.632 09:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:05.632 09:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:05.632 09:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:05.632 09:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:05.632 09:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:05.632 09:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.632 09:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.632 [2024-10-21 09:55:42.199100] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:05.632 09:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.893 09:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:05.893 "name": "raid_bdev1", 00:11:05.893 "aliases": [ 00:11:05.893 "09b329a8-8ad4-4ba7-9a65-936769cdf9e0" 
00:11:05.893 ], 00:11:05.893 "product_name": "Raid Volume", 00:11:05.893 "block_size": 512, 00:11:05.893 "num_blocks": 253952, 00:11:05.893 "uuid": "09b329a8-8ad4-4ba7-9a65-936769cdf9e0", 00:11:05.893 "assigned_rate_limits": { 00:11:05.893 "rw_ios_per_sec": 0, 00:11:05.893 "rw_mbytes_per_sec": 0, 00:11:05.893 "r_mbytes_per_sec": 0, 00:11:05.893 "w_mbytes_per_sec": 0 00:11:05.893 }, 00:11:05.893 "claimed": false, 00:11:05.893 "zoned": false, 00:11:05.893 "supported_io_types": { 00:11:05.893 "read": true, 00:11:05.893 "write": true, 00:11:05.893 "unmap": true, 00:11:05.893 "flush": true, 00:11:05.893 "reset": true, 00:11:05.893 "nvme_admin": false, 00:11:05.893 "nvme_io": false, 00:11:05.893 "nvme_io_md": false, 00:11:05.893 "write_zeroes": true, 00:11:05.893 "zcopy": false, 00:11:05.893 "get_zone_info": false, 00:11:05.893 "zone_management": false, 00:11:05.893 "zone_append": false, 00:11:05.893 "compare": false, 00:11:05.893 "compare_and_write": false, 00:11:05.893 "abort": false, 00:11:05.893 "seek_hole": false, 00:11:05.893 "seek_data": false, 00:11:05.893 "copy": false, 00:11:05.893 "nvme_iov_md": false 00:11:05.893 }, 00:11:05.893 "memory_domains": [ 00:11:05.893 { 00:11:05.893 "dma_device_id": "system", 00:11:05.893 "dma_device_type": 1 00:11:05.893 }, 00:11:05.893 { 00:11:05.893 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.893 "dma_device_type": 2 00:11:05.893 }, 00:11:05.893 { 00:11:05.893 "dma_device_id": "system", 00:11:05.893 "dma_device_type": 1 00:11:05.893 }, 00:11:05.893 { 00:11:05.893 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.893 "dma_device_type": 2 00:11:05.893 }, 00:11:05.893 { 00:11:05.893 "dma_device_id": "system", 00:11:05.893 "dma_device_type": 1 00:11:05.893 }, 00:11:05.893 { 00:11:05.893 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.893 "dma_device_type": 2 00:11:05.893 }, 00:11:05.893 { 00:11:05.893 "dma_device_id": "system", 00:11:05.893 "dma_device_type": 1 00:11:05.893 }, 00:11:05.893 { 00:11:05.893 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:05.893 "dma_device_type": 2 00:11:05.893 } 00:11:05.893 ], 00:11:05.893 "driver_specific": { 00:11:05.893 "raid": { 00:11:05.893 "uuid": "09b329a8-8ad4-4ba7-9a65-936769cdf9e0", 00:11:05.893 "strip_size_kb": 64, 00:11:05.893 "state": "online", 00:11:05.893 "raid_level": "raid0", 00:11:05.893 "superblock": true, 00:11:05.893 "num_base_bdevs": 4, 00:11:05.893 "num_base_bdevs_discovered": 4, 00:11:05.893 "num_base_bdevs_operational": 4, 00:11:05.893 "base_bdevs_list": [ 00:11:05.893 { 00:11:05.893 "name": "pt1", 00:11:05.893 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:05.893 "is_configured": true, 00:11:05.893 "data_offset": 2048, 00:11:05.893 "data_size": 63488 00:11:05.893 }, 00:11:05.893 { 00:11:05.893 "name": "pt2", 00:11:05.893 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:05.893 "is_configured": true, 00:11:05.893 "data_offset": 2048, 00:11:05.893 "data_size": 63488 00:11:05.893 }, 00:11:05.893 { 00:11:05.893 "name": "pt3", 00:11:05.893 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:05.893 "is_configured": true, 00:11:05.893 "data_offset": 2048, 00:11:05.893 "data_size": 63488 00:11:05.893 }, 00:11:05.893 { 00:11:05.893 "name": "pt4", 00:11:05.893 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:05.893 "is_configured": true, 00:11:05.893 "data_offset": 2048, 00:11:05.893 "data_size": 63488 00:11:05.893 } 00:11:05.893 ] 00:11:05.893 } 00:11:05.893 } 00:11:05.893 }' 00:11:05.893 09:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:05.893 09:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:05.893 pt2 00:11:05.893 pt3 00:11:05.893 pt4' 00:11:05.893 09:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:05.893 09:55:42 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:05.893 09:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:05.893 09:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:05.894 09:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.894 09:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:05.894 09:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.894 09:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.894 09:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:05.894 09:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:05.894 09:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:05.894 09:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:05.894 09:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:05.894 09:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.894 09:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.894 09:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.894 09:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:05.894 09:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:05.894 09:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:05.894 09:55:42 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:05.894 09:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:05.894 09:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.894 09:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.894 09:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.894 09:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:05.894 09:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:05.894 09:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:05.894 09:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:05.894 09:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.894 09:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.894 09:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:06.155 09:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.155 09:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:06.155 09:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:06.155 09:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:06.155 09:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:06.155 09:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:06.155 09:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.155 [2024-10-21 09:55:42.542360] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:06.155 09:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.155 09:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=09b329a8-8ad4-4ba7-9a65-936769cdf9e0 00:11:06.155 09:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 09b329a8-8ad4-4ba7-9a65-936769cdf9e0 ']' 00:11:06.155 09:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:06.155 09:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.155 09:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.155 [2024-10-21 09:55:42.589971] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:06.155 [2024-10-21 09:55:42.590046] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:06.155 [2024-10-21 09:55:42.590166] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:06.155 [2024-10-21 09:55:42.590309] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:06.155 [2024-10-21 09:55:42.590379] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000005b80 name raid_bdev1, state offline 00:11:06.155 09:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.155 09:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.155 09:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:06.155 09:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:11:06.155 09:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.155 09:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.155 09:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:06.155 09:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:06.155 09:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:06.155 09:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:06.155 09:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.155 09:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.155 09:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.155 09:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:06.155 09:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:06.155 09:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.155 09:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.155 09:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.155 09:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:06.155 09:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:06.155 09:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.155 09:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.155 09:55:42 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.155 09:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:06.155 09:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:11:06.155 09:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.155 09:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.155 09:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.155 09:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:06.155 09:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:06.155 09:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.155 09:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.155 09:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.155 09:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:06.155 09:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:06.155 09:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:11:06.155 09:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:06.155 09:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:11:06.155 09:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:06.155 09:55:42 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:11:06.416 09:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:06.416 09:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:06.416 09:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.416 09:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.416 [2024-10-21 09:55:42.757720] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:06.416 [2024-10-21 09:55:42.759716] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:06.416 [2024-10-21 09:55:42.759816] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:06.416 [2024-10-21 09:55:42.759873] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:11:06.416 [2024-10-21 09:55:42.759963] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:06.416 [2024-10-21 09:55:42.760047] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:06.416 [2024-10-21 09:55:42.760068] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:06.416 [2024-10-21 09:55:42.760092] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:11:06.416 [2024-10-21 09:55:42.760110] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:06.416 [2024-10-21 09:55:42.760123] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000005f00 name raid_bdev1, state configuring 00:11:06.416 request: 00:11:06.416 { 00:11:06.416 "name": "raid_bdev1", 00:11:06.416 "raid_level": "raid0", 00:11:06.416 "base_bdevs": [ 00:11:06.416 "malloc1", 00:11:06.416 "malloc2", 00:11:06.416 "malloc3", 00:11:06.416 "malloc4" 00:11:06.416 ], 00:11:06.416 "strip_size_kb": 64, 00:11:06.416 "superblock": false, 00:11:06.416 "method": "bdev_raid_create", 00:11:06.416 "req_id": 1 00:11:06.416 } 00:11:06.416 Got JSON-RPC error response 00:11:06.416 response: 00:11:06.416 { 00:11:06.416 "code": -17, 00:11:06.416 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:06.416 } 00:11:06.416 09:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:11:06.416 09:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:11:06.416 09:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:06.416 09:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:06.416 09:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:06.416 09:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:06.416 09:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.416 09:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.416 09:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.416 09:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.416 09:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:06.416 09:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:06.416 09:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:11:06.416 09:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.416 09:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.416 [2024-10-21 09:55:42.813624] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:06.416 [2024-10-21 09:55:42.813758] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:06.416 [2024-10-21 09:55:42.813796] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:11:06.416 [2024-10-21 09:55:42.813848] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:06.416 [2024-10-21 09:55:42.816163] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:06.416 [2024-10-21 09:55:42.816267] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:06.416 [2024-10-21 09:55:42.816401] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:06.416 [2024-10-21 09:55:42.816514] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:06.416 pt1 00:11:06.416 09:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.416 09:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:11:06.416 09:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:06.416 09:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:06.416 09:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:06.416 09:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:06.416 09:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:11:06.416 09:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:06.416 09:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:06.416 09:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:06.416 09:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:06.416 09:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:06.416 09:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.416 09:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.416 09:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.416 09:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.416 09:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:06.416 "name": "raid_bdev1", 00:11:06.416 "uuid": "09b329a8-8ad4-4ba7-9a65-936769cdf9e0", 00:11:06.416 "strip_size_kb": 64, 00:11:06.416 "state": "configuring", 00:11:06.416 "raid_level": "raid0", 00:11:06.416 "superblock": true, 00:11:06.416 "num_base_bdevs": 4, 00:11:06.416 "num_base_bdevs_discovered": 1, 00:11:06.416 "num_base_bdevs_operational": 4, 00:11:06.416 "base_bdevs_list": [ 00:11:06.416 { 00:11:06.416 "name": "pt1", 00:11:06.416 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:06.416 "is_configured": true, 00:11:06.416 "data_offset": 2048, 00:11:06.416 "data_size": 63488 00:11:06.416 }, 00:11:06.416 { 00:11:06.416 "name": null, 00:11:06.416 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:06.416 "is_configured": false, 00:11:06.417 "data_offset": 2048, 00:11:06.417 "data_size": 63488 00:11:06.417 }, 00:11:06.417 { 00:11:06.417 "name": null, 00:11:06.417 
"uuid": "00000000-0000-0000-0000-000000000003", 00:11:06.417 "is_configured": false, 00:11:06.417 "data_offset": 2048, 00:11:06.417 "data_size": 63488 00:11:06.417 }, 00:11:06.417 { 00:11:06.417 "name": null, 00:11:06.417 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:06.417 "is_configured": false, 00:11:06.417 "data_offset": 2048, 00:11:06.417 "data_size": 63488 00:11:06.417 } 00:11:06.417 ] 00:11:06.417 }' 00:11:06.417 09:55:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:06.417 09:55:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.676 09:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:11:06.676 09:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:06.676 09:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.676 09:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.676 [2024-10-21 09:55:43.208915] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:06.676 [2024-10-21 09:55:43.208985] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:06.676 [2024-10-21 09:55:43.209004] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:11:06.676 [2024-10-21 09:55:43.209016] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:06.676 [2024-10-21 09:55:43.209483] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:06.676 [2024-10-21 09:55:43.209503] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:06.676 [2024-10-21 09:55:43.209600] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:06.677 [2024-10-21 09:55:43.209628] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:06.677 pt2 00:11:06.677 09:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.677 09:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:06.677 09:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.677 09:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.677 [2024-10-21 09:55:43.216964] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:06.677 09:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.677 09:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:11:06.677 09:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:06.677 09:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:06.677 09:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:06.677 09:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:06.677 09:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:06.677 09:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:06.677 09:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:06.677 09:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:06.677 09:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:06.677 09:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.677 09:55:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:06.677 09:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.677 09:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.677 09:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.937 09:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:06.937 "name": "raid_bdev1", 00:11:06.937 "uuid": "09b329a8-8ad4-4ba7-9a65-936769cdf9e0", 00:11:06.937 "strip_size_kb": 64, 00:11:06.937 "state": "configuring", 00:11:06.937 "raid_level": "raid0", 00:11:06.937 "superblock": true, 00:11:06.937 "num_base_bdevs": 4, 00:11:06.937 "num_base_bdevs_discovered": 1, 00:11:06.937 "num_base_bdevs_operational": 4, 00:11:06.937 "base_bdevs_list": [ 00:11:06.937 { 00:11:06.937 "name": "pt1", 00:11:06.937 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:06.937 "is_configured": true, 00:11:06.937 "data_offset": 2048, 00:11:06.937 "data_size": 63488 00:11:06.937 }, 00:11:06.937 { 00:11:06.937 "name": null, 00:11:06.937 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:06.937 "is_configured": false, 00:11:06.937 "data_offset": 0, 00:11:06.937 "data_size": 63488 00:11:06.937 }, 00:11:06.937 { 00:11:06.937 "name": null, 00:11:06.937 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:06.937 "is_configured": false, 00:11:06.937 "data_offset": 2048, 00:11:06.937 "data_size": 63488 00:11:06.937 }, 00:11:06.937 { 00:11:06.937 "name": null, 00:11:06.937 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:06.937 "is_configured": false, 00:11:06.937 "data_offset": 2048, 00:11:06.937 "data_size": 63488 00:11:06.937 } 00:11:06.937 ] 00:11:06.937 }' 00:11:06.937 09:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:06.937 09:55:43 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:07.198 09:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:07.198 09:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:07.198 09:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:07.198 09:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.198 09:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.198 [2024-10-21 09:55:43.692130] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:07.198 [2024-10-21 09:55:43.692245] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:07.198 [2024-10-21 09:55:43.692287] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:07.198 [2024-10-21 09:55:43.692338] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:07.198 [2024-10-21 09:55:43.692841] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:07.198 [2024-10-21 09:55:43.692899] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:07.198 [2024-10-21 09:55:43.693025] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:07.198 [2024-10-21 09:55:43.693083] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:07.198 pt2 00:11:07.198 09:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.198 09:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:07.198 09:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:07.198 09:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:07.198 09:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.198 09:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.198 [2024-10-21 09:55:43.704051] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:07.198 [2024-10-21 09:55:43.704128] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:07.198 [2024-10-21 09:55:43.704173] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:07.198 [2024-10-21 09:55:43.704218] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:07.198 [2024-10-21 09:55:43.704633] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:07.198 [2024-10-21 09:55:43.704655] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:07.198 [2024-10-21 09:55:43.704718] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:07.198 [2024-10-21 09:55:43.704735] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:07.198 pt3 00:11:07.198 09:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.198 09:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:07.198 09:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:07.198 09:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:07.198 09:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.198 09:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.198 [2024-10-21 09:55:43.716010] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:07.198 [2024-10-21 09:55:43.716058] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:07.198 [2024-10-21 09:55:43.716075] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:07.198 [2024-10-21 09:55:43.716083] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:07.198 [2024-10-21 09:55:43.716416] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:07.198 [2024-10-21 09:55:43.716431] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:07.198 [2024-10-21 09:55:43.716484] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:07.198 [2024-10-21 09:55:43.716500] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:07.198 [2024-10-21 09:55:43.716649] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:11:07.198 [2024-10-21 09:55:43.716659] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:07.198 [2024-10-21 09:55:43.716883] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:11:07.198 [2024-10-21 09:55:43.717020] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:11:07.198 [2024-10-21 09:55:43.717038] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:11:07.198 [2024-10-21 09:55:43.717182] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:07.198 pt4 00:11:07.198 09:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.198 09:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:07.198 09:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:11:07.198 09:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:07.198 09:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:07.198 09:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:07.198 09:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:07.198 09:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:07.198 09:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:07.198 09:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:07.198 09:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:07.198 09:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:07.198 09:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:07.198 09:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.198 09:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:07.198 09:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.199 09:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.199 09:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.199 09:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:07.199 "name": "raid_bdev1", 00:11:07.199 "uuid": "09b329a8-8ad4-4ba7-9a65-936769cdf9e0", 00:11:07.199 "strip_size_kb": 64, 00:11:07.199 "state": "online", 00:11:07.199 "raid_level": "raid0", 00:11:07.199 
"superblock": true, 00:11:07.199 "num_base_bdevs": 4, 00:11:07.199 "num_base_bdevs_discovered": 4, 00:11:07.199 "num_base_bdevs_operational": 4, 00:11:07.199 "base_bdevs_list": [ 00:11:07.199 { 00:11:07.199 "name": "pt1", 00:11:07.199 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:07.199 "is_configured": true, 00:11:07.199 "data_offset": 2048, 00:11:07.199 "data_size": 63488 00:11:07.199 }, 00:11:07.199 { 00:11:07.199 "name": "pt2", 00:11:07.199 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:07.199 "is_configured": true, 00:11:07.199 "data_offset": 2048, 00:11:07.199 "data_size": 63488 00:11:07.199 }, 00:11:07.199 { 00:11:07.199 "name": "pt3", 00:11:07.199 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:07.199 "is_configured": true, 00:11:07.199 "data_offset": 2048, 00:11:07.199 "data_size": 63488 00:11:07.199 }, 00:11:07.199 { 00:11:07.199 "name": "pt4", 00:11:07.199 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:07.199 "is_configured": true, 00:11:07.199 "data_offset": 2048, 00:11:07.199 "data_size": 63488 00:11:07.199 } 00:11:07.199 ] 00:11:07.199 }' 00:11:07.199 09:55:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:07.199 09:55:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.769 09:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:07.769 09:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:07.769 09:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:07.769 09:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:07.769 09:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:07.769 09:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:07.769 09:55:44 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:07.769 09:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.769 09:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.769 09:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:07.769 [2024-10-21 09:55:44.171633] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:07.769 09:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.769 09:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:07.769 "name": "raid_bdev1", 00:11:07.769 "aliases": [ 00:11:07.769 "09b329a8-8ad4-4ba7-9a65-936769cdf9e0" 00:11:07.769 ], 00:11:07.769 "product_name": "Raid Volume", 00:11:07.769 "block_size": 512, 00:11:07.769 "num_blocks": 253952, 00:11:07.769 "uuid": "09b329a8-8ad4-4ba7-9a65-936769cdf9e0", 00:11:07.769 "assigned_rate_limits": { 00:11:07.769 "rw_ios_per_sec": 0, 00:11:07.769 "rw_mbytes_per_sec": 0, 00:11:07.769 "r_mbytes_per_sec": 0, 00:11:07.769 "w_mbytes_per_sec": 0 00:11:07.769 }, 00:11:07.769 "claimed": false, 00:11:07.769 "zoned": false, 00:11:07.769 "supported_io_types": { 00:11:07.769 "read": true, 00:11:07.769 "write": true, 00:11:07.769 "unmap": true, 00:11:07.769 "flush": true, 00:11:07.769 "reset": true, 00:11:07.769 "nvme_admin": false, 00:11:07.769 "nvme_io": false, 00:11:07.769 "nvme_io_md": false, 00:11:07.769 "write_zeroes": true, 00:11:07.769 "zcopy": false, 00:11:07.769 "get_zone_info": false, 00:11:07.769 "zone_management": false, 00:11:07.769 "zone_append": false, 00:11:07.769 "compare": false, 00:11:07.769 "compare_and_write": false, 00:11:07.769 "abort": false, 00:11:07.769 "seek_hole": false, 00:11:07.769 "seek_data": false, 00:11:07.769 "copy": false, 00:11:07.769 "nvme_iov_md": false 00:11:07.769 }, 00:11:07.769 
"memory_domains": [ 00:11:07.769 { 00:11:07.769 "dma_device_id": "system", 00:11:07.769 "dma_device_type": 1 00:11:07.769 }, 00:11:07.769 { 00:11:07.769 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.769 "dma_device_type": 2 00:11:07.769 }, 00:11:07.769 { 00:11:07.769 "dma_device_id": "system", 00:11:07.769 "dma_device_type": 1 00:11:07.769 }, 00:11:07.769 { 00:11:07.769 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.769 "dma_device_type": 2 00:11:07.769 }, 00:11:07.769 { 00:11:07.769 "dma_device_id": "system", 00:11:07.769 "dma_device_type": 1 00:11:07.769 }, 00:11:07.769 { 00:11:07.769 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.769 "dma_device_type": 2 00:11:07.769 }, 00:11:07.769 { 00:11:07.769 "dma_device_id": "system", 00:11:07.769 "dma_device_type": 1 00:11:07.769 }, 00:11:07.769 { 00:11:07.769 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.769 "dma_device_type": 2 00:11:07.769 } 00:11:07.769 ], 00:11:07.769 "driver_specific": { 00:11:07.769 "raid": { 00:11:07.769 "uuid": "09b329a8-8ad4-4ba7-9a65-936769cdf9e0", 00:11:07.769 "strip_size_kb": 64, 00:11:07.769 "state": "online", 00:11:07.769 "raid_level": "raid0", 00:11:07.769 "superblock": true, 00:11:07.769 "num_base_bdevs": 4, 00:11:07.769 "num_base_bdevs_discovered": 4, 00:11:07.769 "num_base_bdevs_operational": 4, 00:11:07.769 "base_bdevs_list": [ 00:11:07.769 { 00:11:07.769 "name": "pt1", 00:11:07.769 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:07.769 "is_configured": true, 00:11:07.769 "data_offset": 2048, 00:11:07.769 "data_size": 63488 00:11:07.769 }, 00:11:07.769 { 00:11:07.769 "name": "pt2", 00:11:07.769 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:07.769 "is_configured": true, 00:11:07.769 "data_offset": 2048, 00:11:07.769 "data_size": 63488 00:11:07.769 }, 00:11:07.769 { 00:11:07.769 "name": "pt3", 00:11:07.769 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:07.769 "is_configured": true, 00:11:07.769 "data_offset": 2048, 00:11:07.769 "data_size": 63488 
00:11:07.769 }, 00:11:07.769 { 00:11:07.769 "name": "pt4", 00:11:07.769 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:07.770 "is_configured": true, 00:11:07.770 "data_offset": 2048, 00:11:07.770 "data_size": 63488 00:11:07.770 } 00:11:07.770 ] 00:11:07.770 } 00:11:07.770 } 00:11:07.770 }' 00:11:07.770 09:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:07.770 09:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:07.770 pt2 00:11:07.770 pt3 00:11:07.770 pt4' 00:11:07.770 09:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:07.770 09:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:07.770 09:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:07.770 09:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:07.770 09:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:07.770 09:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.770 09:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.770 09:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.770 09:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:07.770 09:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:07.770 09:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:07.770 09:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:11:07.770 09:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.770 09:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.770 09:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:07.770 09:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.030 09:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:08.030 09:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:08.030 09:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:08.030 09:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:08.030 09:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.030 09:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.030 09:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:08.030 09:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.030 09:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:08.030 09:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:08.030 09:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:08.030 09:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:08.030 09:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.030 09:55:44 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:08.030 09:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:08.030 09:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:08.030 09:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:11:08.030 09:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:11:08.030 09:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:11:08.030 09:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:08.030 09:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:08.030 09:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid'
00:11:08.030 [2024-10-21 09:55:44.491014] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:11:08.030 09:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:08.030 09:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 09b329a8-8ad4-4ba7-9a65-936769cdf9e0 '!=' 09b329a8-8ad4-4ba7-9a65-936769cdf9e0 ']'
00:11:08.030 09:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0
00:11:08.030 09:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:11:08.030 09:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1
00:11:08.030 09:55:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 70296
00:11:08.030 09:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 70296 ']'
00:11:08.030 09:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 70296
00:11:08.030 09:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname
00:11:08.030 09:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:11:08.030 09:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70296
00:11:08.030 09:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:11:08.030 09:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:11:08.030 09:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70296'
00:11:08.030 killing process with pid 70296 09:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 70296
00:11:08.030 [2024-10-21 09:55:44.577641] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:11:08.030 [2024-10-21 09:55:44.577805] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:11:08.030 09:55:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 70296
00:11:08.030 [2024-10-21 09:55:44.577961] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:11:08.030 [2024-10-21 09:55:44.578009] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline
00:11:08.622 [2024-10-21 09:55:44.991977] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:11:09.560 09:55:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0
00:11:09.560
00:11:09.560 real 0m5.704s
00:11:09.560 user 0m8.125s
00:11:09.560 sys 0m0.996s
00:11:09.560 09:55:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:11:09.560 09:55:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:09.560 ************************************
00:11:09.560 END TEST raid_superblock_test
00:11:09.560 ************************************
00:11:09.820 09:55:46 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read
00:11:09.820 09:55:46 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:11:09.820 09:55:46 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:11:09.820 09:55:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:11:09.820 ************************************
00:11:09.820 START TEST raid_read_error_test
00:11:09.820 ************************************
00:11:09.820 09:55:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 4 read
00:11:09.820 09:55:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0
00:11:09.820 09:55:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4
00:11:09.820 09:55:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read
00:11:09.820 09:55:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 ))
00:11:09.820 09:55:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:11:09.820 09:55:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1
00:11:09.820 09:55:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:11:09.820 09:55:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:11:09.820 09:55:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2
00:11:09.820 09:55:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:11:09.820 09:55:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:11:09.820 09:55:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3
00:11:09.820 09:55:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:11:09.820 09:55:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:11:09.820 09:55:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4
00:11:09.820 09:55:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:11:09.820 09:55:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:11:09.820 09:55:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:11:09.820 09:55:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs
00:11:09.820 09:55:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1
00:11:09.820 09:55:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size
00:11:09.820 09:55:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg
00:11:09.820 09:55:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log
00:11:09.820 09:55:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s
00:11:09.820 09:55:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']'
00:11:09.820 09:55:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64
00:11:09.820 09:55:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64'
00:11:09.820 09:55:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest
00:11:09.820 09:55:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.mJJyQ40RdD
00:11:09.820 09:55:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=70561
00:11:09.820 09:55:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid
00:11:09.820 09:55:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 70561
00:11:09.820 09:55:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 70561 ']'
00:11:09.820 09:55:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:09.820 09:55:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:11:09.820 09:55:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:09.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 09:55:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:11:09.820 09:55:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:09.821 [2024-10-21 09:55:46.325684] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization...
00:11:09.821 [2024-10-21 09:55:46.325813] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70561 ]
00:11:10.080 [2024-10-21 09:55:46.486756] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:10.080 [2024-10-21 09:55:46.607287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:11:10.347 [2024-10-21 09:55:46.833137] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:11:10.347 [2024-10-21 09:55:46.833182] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:11:10.607 09:55:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:11:10.607 09:55:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0
00:11:10.607 09:55:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:11:10.607 09:55:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:11:10.607 09:55:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:10.607 09:55:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:10.867 BaseBdev1_malloc
00:11:10.867 09:55:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:10.867 09:55:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc
00:11:10.867 09:55:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:10.867 09:55:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:10.867 true
00:11:10.867 09:55:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:10.867 09:55:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
00:11:10.867 09:55:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:10.867 09:55:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:10.867 [2024-10-21 09:55:47.229486] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc
00:11:10.867 [2024-10-21 09:55:47.229543] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:10.867 [2024-10-21 09:55:47.229561] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:11:10.867 [2024-10-21 09:55:47.229588] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:10.867 [2024-10-21 09:55:47.231916] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:10.867 [2024-10-21 09:55:47.231957] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:11:10.867 BaseBdev1
00:11:10.867 09:55:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:10.867 09:55:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:11:10.867 09:55:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:11:10.867 09:55:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:10.867 09:55:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:10.867 BaseBdev2_malloc
00:11:10.867 09:55:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:10.867 09:55:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc
00:11:10.867 09:55:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:10.867 09:55:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:10.867 true
00:11:10.867 09:55:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:10.867 09:55:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
00:11:10.867 09:55:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:10.867 09:55:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:10.867 [2024-10-21 09:55:47.300588] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc
00:11:10.867 [2024-10-21 09:55:47.300638] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:10.867 [2024-10-21 09:55:47.300671] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180
00:11:10.867 [2024-10-21 09:55:47.300682] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:10.867 [2024-10-21 09:55:47.302763] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:10.867 [2024-10-21 09:55:47.302801] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:11:10.868 BaseBdev2
00:11:10.868 09:55:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:10.868 09:55:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:11:10.868 09:55:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:11:10.868 09:55:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:10.868 09:55:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:10.868 BaseBdev3_malloc
00:11:10.868 09:55:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:10.868 09:55:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc
00:11:10.868 09:55:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:10.868 09:55:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:10.868 true
00:11:10.868 09:55:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:10.868 09:55:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3
00:11:10.868 09:55:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:10.868 09:55:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:10.868 [2024-10-21 09:55:47.386539] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc
00:11:10.868 [2024-10-21 09:55:47.386615] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:10.868 [2024-10-21 09:55:47.386637] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080
00:11:10.868 [2024-10-21 09:55:47.386649] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:10.868 [2024-10-21 09:55:47.388846] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:10.868 [2024-10-21 09:55:47.388886] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:11:10.868 BaseBdev3
00:11:10.868 09:55:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:10.868 09:55:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:11:10.868 09:55:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc
00:11:10.868 09:55:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:10.868 09:55:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:10.868 BaseBdev4_malloc
00:11:10.868 09:55:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:10.868 09:55:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc
00:11:10.868 09:55:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:10.868 09:55:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:10.868 true
00:11:10.868 09:55:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:10.868 09:55:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4
00:11:10.868 09:55:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:10.868 09:55:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:10.868 [2024-10-21 09:55:47.454722] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc
00:11:10.868 [2024-10-21 09:55:47.454777] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:10.868 [2024-10-21 09:55:47.454795] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80
00:11:10.868 [2024-10-21 09:55:47.454805] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:10.868 [2024-10-21 09:55:47.456895] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:10.868 [2024-10-21 09:55:47.456936] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4
00:11:10.868 BaseBdev4
00:11:10.868 09:55:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:10.868 09:55:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s
00:11:10.868 09:55:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:10.868 09:55:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:11.128 [2024-10-21 09:55:47.466773] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:11:11.128 [2024-10-21 09:55:47.468630] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:11:11.128 [2024-10-21 09:55:47.468704] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:11:11.128 [2024-10-21 09:55:47.468760] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:11:11.128 [2024-10-21 09:55:47.468963] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980
00:11:11.128 [2024-10-21 09:55:47.468979] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512
00:11:11.128 [2024-10-21 09:55:47.469220] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70
00:11:11.128 [2024-10-21 09:55:47.469380] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980
00:11:11.128 [2024-10-21 09:55:47.469389] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980
00:11:11.128 [2024-10-21 09:55:47.469527] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:11:11.128 09:55:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:11.128 09:55:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4
00:11:11.128 09:55:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:11.128 09:55:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:11:11.128 09:55:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:11:11.128 09:55:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:11.128 09:55:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:11.128 09:55:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:11.128 09:55:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:11.128 09:55:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:11.128 09:55:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:11.128 09:55:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:11.128 09:55:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:11.128 09:55:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:11.128 09:55:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:11.128 09:55:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:11.128 09:55:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:11.128 "name": "raid_bdev1",
00:11:11.128 "uuid": "a0962c34-4af3-4a30-b12b-cd228ee9b8a5",
00:11:11.128 "strip_size_kb": 64,
00:11:11.128 "state": "online",
00:11:11.128 "raid_level": "raid0",
00:11:11.128 "superblock": true,
00:11:11.128 "num_base_bdevs": 4,
00:11:11.128 "num_base_bdevs_discovered": 4,
00:11:11.128 "num_base_bdevs_operational": 4,
00:11:11.128 "base_bdevs_list": [
00:11:11.128 {
00:11:11.128 "name": "BaseBdev1",
00:11:11.128 "uuid": "177c34ee-c48a-5672-a0c6-1a33b9e3b567",
00:11:11.128 "is_configured": true,
00:11:11.128 "data_offset": 2048,
00:11:11.128 "data_size": 63488
00:11:11.128 },
00:11:11.128 {
00:11:11.128 "name": "BaseBdev2",
00:11:11.128 "uuid": "3e317ca4-e4da-5d98-81aa-ccab4f4e9bb0",
00:11:11.128 "is_configured": true,
00:11:11.128 "data_offset": 2048,
00:11:11.128 "data_size": 63488
00:11:11.128 },
00:11:11.128 {
00:11:11.128 "name": "BaseBdev3",
00:11:11.128 "uuid": "2e08a6ec-685a-59fd-b271-3e2c707ebf1d",
00:11:11.128 "is_configured": true,
00:11:11.128 "data_offset": 2048,
00:11:11.128 "data_size": 63488
00:11:11.128 },
00:11:11.128 {
00:11:11.128 "name": "BaseBdev4",
00:11:11.128 "uuid": "b8823856-2da8-5321-94cd-5f442c713bb4",
00:11:11.128 "is_configured": true,
00:11:11.128 "data_offset": 2048,
00:11:11.128 "data_size": 63488
00:11:11.128 }
00:11:11.128 ]
00:11:11.128 }'
00:11:11.128 09:55:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:11.128 09:55:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:11.388 09:55:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1
00:11:11.388 09:55:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:11:11.388 [2024-10-21 09:55:47.967677] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:11:12.327 09:55:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure
00:11:12.327 09:55:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:12.327 09:55:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:12.327 09:55:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:12.327 09:55:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs
00:11:12.327 09:55:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]]
00:11:12.327 09:55:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4
00:11:12.327 09:55:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4
00:11:12.327 09:55:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:12.327 09:55:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:11:12.327 09:55:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:11:12.327 09:55:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:12.327 09:55:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:12.327 09:55:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:12.327 09:55:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:12.327 09:55:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:12.327 09:55:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:12.327 09:55:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:12.327 09:55:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:12.327 09:55:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:12.327 09:55:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:12.327 09:55:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:12.587 09:55:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:12.587 "name": "raid_bdev1",
00:11:12.587 "uuid": "a0962c34-4af3-4a30-b12b-cd228ee9b8a5",
00:11:12.587 "strip_size_kb": 64,
00:11:12.587 "state": "online",
00:11:12.587 "raid_level": "raid0",
00:11:12.587 "superblock": true,
00:11:12.587 "num_base_bdevs": 4,
00:11:12.587 "num_base_bdevs_discovered": 4,
00:11:12.587 "num_base_bdevs_operational": 4,
00:11:12.587 "base_bdevs_list": [
00:11:12.587 {
00:11:12.587 "name": "BaseBdev1",
00:11:12.587 "uuid": "177c34ee-c48a-5672-a0c6-1a33b9e3b567",
00:11:12.587 "is_configured": true,
00:11:12.587 "data_offset": 2048,
00:11:12.587 "data_size": 63488
00:11:12.587 },
00:11:12.587 {
00:11:12.587 "name": "BaseBdev2",
00:11:12.587 "uuid": "3e317ca4-e4da-5d98-81aa-ccab4f4e9bb0",
00:11:12.587 "is_configured": true,
00:11:12.587 "data_offset": 2048,
00:11:12.587 "data_size": 63488
00:11:12.587 },
00:11:12.587 {
00:11:12.587 "name": "BaseBdev3",
00:11:12.587 "uuid": "2e08a6ec-685a-59fd-b271-3e2c707ebf1d",
00:11:12.587 "is_configured": true,
00:11:12.587 "data_offset": 2048,
00:11:12.587 "data_size": 63488
00:11:12.587 },
00:11:12.587 {
00:11:12.587 "name": "BaseBdev4",
00:11:12.587 "uuid": "b8823856-2da8-5321-94cd-5f442c713bb4",
00:11:12.587 "is_configured": true,
00:11:12.587 "data_offset": 2048,
00:11:12.587 "data_size": 63488
00:11:12.587 }
00:11:12.587 ]
00:11:12.587 }'
00:11:12.587 09:55:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:12.587 09:55:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:12.847 09:55:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:11:12.847 09:55:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:12.847 09:55:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:12.847 [2024-10-21 09:55:49.326067] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:11:12.847 [2024-10-21 09:55:49.326103] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:11:12.847 [2024-10-21 09:55:49.328909] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:11:12.847 [2024-10-21 09:55:49.329009] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:11:12.847 [2024-10-21 09:55:49.329074] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:11:12.847 [2024-10-21 09:55:49.329132] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline
00:11:12.847 09:55:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:12.847 {
00:11:12.847 "results": [
00:11:12.847 {
00:11:12.847 "job": "raid_bdev1",
00:11:12.847 "core_mask": "0x1",
00:11:12.847 "workload": "randrw",
00:11:12.847 "percentage": 50,
00:11:12.847 "status": "finished",
00:11:12.847 "queue_depth": 1,
00:11:12.847 "io_size": 131072,
00:11:12.847 "runtime": 1.358951,
00:11:12.847 "iops": 15068.240135221948,
00:11:12.847 "mibps": 1883.5300169027435,
00:11:12.847 "io_failed": 1,
00:11:12.847 "io_timeout": 0,
00:11:12.847 "avg_latency_us": 92.16526961941476,
00:11:12.847 "min_latency_us": 26.1589519650655,
00:11:12.847 "max_latency_us": 1423.7624454148472
00:11:12.847 }
00:11:12.847 ],
00:11:12.847 "core_count": 1
00:11:12.847 }
00:11:12.847 09:55:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 70561
00:11:12.847 09:55:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 70561 ']'
00:11:12.847 09:55:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 70561
00:11:12.847 09:55:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname
00:11:12.847 09:55:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:11:12.847 09:55:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70561
00:11:12.847 09:55:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:11:12.847 09:55:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:11:12.847 09:55:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70561'
00:11:12.847 killing process with pid 70561 09:55:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 70561
00:11:12.847 [2024-10-21 09:55:49.376402] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:11:12.847 09:55:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 70561
00:11:13.416 [2024-10-21 09:55:49.726020] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:11:14.799 09:55:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.mJJyQ40RdD
00:11:14.799 09:55:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1
00:11:14.799 09:55:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}'
00:11:14.799 09:55:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74
00:11:14.799 09:55:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0
00:11:14.799 09:55:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:11:14.799 09:55:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1
00:11:14.799 09:55:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]]
00:11:14.799
00:11:14.799 real 0m4.751s
00:11:14.799 user 0m5.587s
00:11:14.799 sys 0m0.558s
00:11:14.799 ************************************
00:11:14.799 END TEST raid_read_error_test
00:11:14.799 ************************************
00:11:14.799 09:55:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:11:14.799 09:55:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:14.799 09:55:51 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write
00:11:14.799 09:55:51 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:11:14.799 09:55:51 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:11:14.799 09:55:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:11:14.799 ************************************
00:11:14.799 START TEST raid_write_error_test
00:11:14.799 ************************************
00:11:14.799 09:55:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 4 write
00:11:14.799 09:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0
00:11:14.799 09:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4
00:11:14.799 09:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write
00:11:14.799 09:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 ))
00:11:14.799 09:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:11:14.799 09:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1
00:11:14.799 09:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:11:14.799 09:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:11:14.799 09:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2
00:11:14.799 09:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:11:14.799 09:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:11:14.799 09:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3
00:11:14.799 09:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:11:14.799 09:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:11:14.799 09:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4
00:11:14.799 09:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:11:14.799 09:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:11:14.799 09:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:11:14.799 09:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs
00:11:14.799 09:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1
00:11:14.799 09:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size
00:11:14.799 09:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg
00:11:14.799 09:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log
00:11:14.799 09:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s
00:11:14.799 09:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']'
00:11:14.799 09:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64
00:11:14.799 09:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64'
00:11:14.799 09:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest
00:11:14.799 09:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.MuCRIXS6DW
00:11:14.799 09:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=70707
00:11:14.799 09:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid
00:11:14.799 09:55:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 70707
00:11:14.799 09:55:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 70707 ']'
00:11:14.799 09:55:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:14.799 09:55:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:11:14.799 09:55:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:14.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 09:55:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:11:14.799 09:55:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:14.799 [2024-10-21 09:55:51.148332] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization...
00:11:14.799 [2024-10-21 09:55:51.148545] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70707 ]
00:11:14.799 [2024-10-21 09:55:51.312847] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:15.059 [2024-10-21 09:55:51.436113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:11:15.319 [2024-10-21 09:55:51.666680] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:11:15.319 [2024-10-21 09:55:51.666816] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:11:15.579 09:55:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:11:15.579 09:55:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0
00:11:15.579 09:55:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:11:15.579 09:55:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:11:15.579 09:55:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:15.579 09:55:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:15.579 BaseBdev1_malloc
00:11:15.579 09:55:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:15.579 09:55:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc
00:11:15.579 09:55:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:15.579 09:55:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:15.579 true
00:11:15.579 09:55:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0
== 0 ]] 00:11:15.579 09:55:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:15.579 09:55:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.579 09:55:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.579 [2024-10-21 09:55:52.101957] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:15.579 [2024-10-21 09:55:52.102062] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:15.579 [2024-10-21 09:55:52.102088] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:15.579 [2024-10-21 09:55:52.102104] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:15.579 [2024-10-21 09:55:52.104415] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:15.579 [2024-10-21 09:55:52.104459] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:15.579 BaseBdev1 00:11:15.579 09:55:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.579 09:55:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:15.579 09:55:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:15.579 09:55:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.579 09:55:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.579 BaseBdev2_malloc 00:11:15.579 09:55:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.579 09:55:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:15.579 09:55:52 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.579 09:55:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.579 true 00:11:15.579 09:55:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.579 09:55:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:15.579 09:55:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.579 09:55:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.838 [2024-10-21 09:55:52.174308] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:15.838 [2024-10-21 09:55:52.174432] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:15.838 [2024-10-21 09:55:52.174499] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:11:15.838 [2024-10-21 09:55:52.174546] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:15.838 [2024-10-21 09:55:52.177090] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:15.838 [2024-10-21 09:55:52.177175] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:15.838 BaseBdev2 00:11:15.838 09:55:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.838 09:55:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:15.838 09:55:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:15.838 09:55:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.838 09:55:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:11:15.838 BaseBdev3_malloc 00:11:15.838 09:55:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.838 09:55:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:15.838 09:55:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.838 09:55:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.838 true 00:11:15.838 09:55:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.838 09:55:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:15.838 09:55:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.838 09:55:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.838 [2024-10-21 09:55:52.260352] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:15.838 [2024-10-21 09:55:52.260421] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:15.838 [2024-10-21 09:55:52.260463] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:11:15.838 [2024-10-21 09:55:52.260475] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:15.838 [2024-10-21 09:55:52.262844] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:15.838 [2024-10-21 09:55:52.262960] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:15.838 BaseBdev3 00:11:15.838 09:55:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.838 09:55:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:15.838 09:55:52 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:15.838 09:55:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.838 09:55:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.838 BaseBdev4_malloc 00:11:15.838 09:55:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.838 09:55:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:15.838 09:55:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.838 09:55:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.839 true 00:11:15.839 09:55:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.839 09:55:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:15.839 09:55:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.839 09:55:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.839 [2024-10-21 09:55:52.326331] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:15.839 [2024-10-21 09:55:52.326393] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:15.839 [2024-10-21 09:55:52.326416] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:11:15.839 [2024-10-21 09:55:52.326428] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:15.839 [2024-10-21 09:55:52.328883] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:15.839 [2024-10-21 09:55:52.328976] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:15.839 BaseBdev4 
00:11:15.839 09:55:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.839 09:55:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:15.839 09:55:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.839 09:55:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.839 [2024-10-21 09:55:52.338382] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:15.839 [2024-10-21 09:55:52.340492] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:15.839 [2024-10-21 09:55:52.340643] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:15.839 [2024-10-21 09:55:52.340736] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:15.839 [2024-10-21 09:55:52.341028] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:11:15.839 [2024-10-21 09:55:52.341048] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:15.839 [2024-10-21 09:55:52.341361] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:11:15.839 [2024-10-21 09:55:52.341563] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:11:15.839 [2024-10-21 09:55:52.341595] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:11:15.839 [2024-10-21 09:55:52.341811] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:15.839 09:55:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.839 09:55:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:11:15.839 09:55:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:15.839 09:55:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:15.839 09:55:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:15.839 09:55:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:15.839 09:55:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:15.839 09:55:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:15.839 09:55:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:15.839 09:55:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:15.839 09:55:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:15.839 09:55:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.839 09:55:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:15.839 09:55:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.839 09:55:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.839 09:55:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.839 09:55:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:15.839 "name": "raid_bdev1", 00:11:15.839 "uuid": "86d16745-b89e-407b-a1f1-2d1593745fae", 00:11:15.839 "strip_size_kb": 64, 00:11:15.839 "state": "online", 00:11:15.839 "raid_level": "raid0", 00:11:15.839 "superblock": true, 00:11:15.839 "num_base_bdevs": 4, 00:11:15.839 "num_base_bdevs_discovered": 4, 00:11:15.839 
"num_base_bdevs_operational": 4, 00:11:15.839 "base_bdevs_list": [ 00:11:15.839 { 00:11:15.839 "name": "BaseBdev1", 00:11:15.839 "uuid": "0e78516e-3885-5279-8333-5fbf1e7682b3", 00:11:15.839 "is_configured": true, 00:11:15.839 "data_offset": 2048, 00:11:15.839 "data_size": 63488 00:11:15.839 }, 00:11:15.839 { 00:11:15.839 "name": "BaseBdev2", 00:11:15.839 "uuid": "a138d3db-efb6-55e2-8528-37d253895797", 00:11:15.839 "is_configured": true, 00:11:15.839 "data_offset": 2048, 00:11:15.839 "data_size": 63488 00:11:15.839 }, 00:11:15.839 { 00:11:15.839 "name": "BaseBdev3", 00:11:15.839 "uuid": "1b142c2b-0a6e-50c6-a671-61ea00036ff4", 00:11:15.839 "is_configured": true, 00:11:15.839 "data_offset": 2048, 00:11:15.839 "data_size": 63488 00:11:15.839 }, 00:11:15.839 { 00:11:15.839 "name": "BaseBdev4", 00:11:15.839 "uuid": "91c0d6e2-3d7c-5022-824b-a49de44f1e35", 00:11:15.839 "is_configured": true, 00:11:15.839 "data_offset": 2048, 00:11:15.839 "data_size": 63488 00:11:15.839 } 00:11:15.839 ] 00:11:15.839 }' 00:11:15.839 09:55:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:15.839 09:55:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.406 09:55:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:16.406 09:55:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:16.406 [2024-10-21 09:55:52.875101] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:17.421 09:55:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:17.421 09:55:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.421 09:55:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.421 09:55:53 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.421 09:55:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:17.421 09:55:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:11:17.421 09:55:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:17.421 09:55:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:17.421 09:55:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:17.421 09:55:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:17.421 09:55:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:17.421 09:55:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:17.421 09:55:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:17.421 09:55:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:17.421 09:55:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.421 09:55:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.421 09:55:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.421 09:55:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.421 09:55:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:17.421 09:55:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.421 09:55:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.421 09:55:53 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.421 09:55:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:17.421 "name": "raid_bdev1", 00:11:17.421 "uuid": "86d16745-b89e-407b-a1f1-2d1593745fae", 00:11:17.421 "strip_size_kb": 64, 00:11:17.421 "state": "online", 00:11:17.421 "raid_level": "raid0", 00:11:17.421 "superblock": true, 00:11:17.421 "num_base_bdevs": 4, 00:11:17.421 "num_base_bdevs_discovered": 4, 00:11:17.421 "num_base_bdevs_operational": 4, 00:11:17.421 "base_bdevs_list": [ 00:11:17.421 { 00:11:17.421 "name": "BaseBdev1", 00:11:17.421 "uuid": "0e78516e-3885-5279-8333-5fbf1e7682b3", 00:11:17.421 "is_configured": true, 00:11:17.421 "data_offset": 2048, 00:11:17.421 "data_size": 63488 00:11:17.421 }, 00:11:17.421 { 00:11:17.421 "name": "BaseBdev2", 00:11:17.421 "uuid": "a138d3db-efb6-55e2-8528-37d253895797", 00:11:17.421 "is_configured": true, 00:11:17.421 "data_offset": 2048, 00:11:17.421 "data_size": 63488 00:11:17.421 }, 00:11:17.421 { 00:11:17.421 "name": "BaseBdev3", 00:11:17.421 "uuid": "1b142c2b-0a6e-50c6-a671-61ea00036ff4", 00:11:17.421 "is_configured": true, 00:11:17.421 "data_offset": 2048, 00:11:17.421 "data_size": 63488 00:11:17.421 }, 00:11:17.421 { 00:11:17.421 "name": "BaseBdev4", 00:11:17.421 "uuid": "91c0d6e2-3d7c-5022-824b-a49de44f1e35", 00:11:17.421 "is_configured": true, 00:11:17.421 "data_offset": 2048, 00:11:17.421 "data_size": 63488 00:11:17.422 } 00:11:17.422 ] 00:11:17.422 }' 00:11:17.422 09:55:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:17.422 09:55:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.681 09:55:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:17.681 09:55:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.681 09:55:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:11:17.681 [2024-10-21 09:55:54.263442] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:17.681 [2024-10-21 09:55:54.263545] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:17.681 [2024-10-21 09:55:54.266151] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:17.681 [2024-10-21 09:55:54.266208] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:17.681 [2024-10-21 09:55:54.266254] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:17.681 [2024-10-21 09:55:54.266265] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:11:17.681 { 00:11:17.681 "results": [ 00:11:17.681 { 00:11:17.681 "job": "raid_bdev1", 00:11:17.681 "core_mask": "0x1", 00:11:17.681 "workload": "randrw", 00:11:17.682 "percentage": 50, 00:11:17.682 "status": "finished", 00:11:17.682 "queue_depth": 1, 00:11:17.682 "io_size": 131072, 00:11:17.682 "runtime": 1.389066, 00:11:17.682 "iops": 15205.900943511684, 00:11:17.682 "mibps": 1900.7376179389605, 00:11:17.682 "io_failed": 1, 00:11:17.682 "io_timeout": 0, 00:11:17.682 "avg_latency_us": 91.50297965730766, 00:11:17.682 "min_latency_us": 25.2646288209607, 00:11:17.682 "max_latency_us": 1395.1441048034935 00:11:17.682 } 00:11:17.682 ], 00:11:17.682 "core_count": 1 00:11:17.682 } 00:11:17.682 09:55:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.682 09:55:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 70707 00:11:17.682 09:55:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 70707 ']' 00:11:17.682 09:55:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 70707 00:11:17.682 09:55:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 
00:11:17.942 09:55:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:17.942 09:55:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70707 00:11:17.942 09:55:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:17.942 09:55:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:17.942 09:55:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70707' 00:11:17.942 killing process with pid 70707 00:11:17.942 09:55:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 70707 00:11:17.942 [2024-10-21 09:55:54.310448] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:17.942 09:55:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 70707 00:11:18.201 [2024-10-21 09:55:54.633574] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:19.583 09:55:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:19.583 09:55:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.MuCRIXS6DW 00:11:19.583 09:55:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:19.583 09:55:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:11:19.583 09:55:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:11:19.583 09:55:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:19.583 09:55:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:19.583 ************************************ 00:11:19.583 END TEST raid_write_error_test 00:11:19.583 ************************************ 00:11:19.583 09:55:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- 
# [[ 0.72 != \0\.\0\0 ]] 00:11:19.583 00:11:19.583 real 0m4.765s 00:11:19.583 user 0m5.640s 00:11:19.583 sys 0m0.587s 00:11:19.583 09:55:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:19.583 09:55:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.583 09:55:55 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:19.583 09:55:55 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:11:19.583 09:55:55 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:19.583 09:55:55 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:19.583 09:55:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:19.583 ************************************ 00:11:19.583 START TEST raid_state_function_test 00:11:19.583 ************************************ 00:11:19.583 09:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 4 false 00:11:19.583 09:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:11:19.583 09:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:19.583 09:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:19.583 09:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:19.583 09:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:19.583 09:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:19.583 09:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:19.583 09:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:19.583 09:55:55 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:19.583 09:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:19.583 09:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:19.583 09:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:19.583 09:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:19.583 09:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:19.583 09:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:19.583 09:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:19.583 09:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:19.583 09:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:19.583 09:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:19.583 09:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:19.583 09:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:19.583 09:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:19.583 09:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:19.583 09:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:19.583 09:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:11:19.583 09:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:19.583 Process raid pid: 70850 00:11:19.583 09:55:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:19.583 09:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:19.583 09:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:19.583 09:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=70850 00:11:19.583 09:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:19.583 09:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 70850' 00:11:19.583 09:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 70850 00:11:19.583 09:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 70850 ']' 00:11:19.583 09:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:19.583 09:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:19.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:19.583 09:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:19.583 09:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:19.583 09:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.583 [2024-10-21 09:55:55.971726] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
00:11:19.583 [2024-10-21 09:55:55.971840] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:19.583 [2024-10-21 09:55:56.137163] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:19.842 [2024-10-21 09:55:56.265875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:20.102 [2024-10-21 09:55:56.484999] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:20.102 [2024-10-21 09:55:56.485037] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:20.362 09:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:20.362 09:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:11:20.362 09:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:20.362 09:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.362 09:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.362 [2024-10-21 09:55:56.832101] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:20.362 [2024-10-21 09:55:56.832158] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:20.362 [2024-10-21 09:55:56.832169] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:20.362 [2024-10-21 09:55:56.832178] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:20.362 [2024-10-21 09:55:56.832185] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:11:20.362 [2024-10-21 09:55:56.832194] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:20.362 [2024-10-21 09:55:56.832200] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:20.362 [2024-10-21 09:55:56.832208] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:20.362 09:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.362 09:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:20.362 09:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:20.362 09:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:20.362 09:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:20.362 09:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:20.362 09:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:20.362 09:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:20.362 09:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:20.362 09:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:20.362 09:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:20.362 09:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.362 09:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:20.362 09:55:56 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.362 09:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.362 09:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.362 09:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:20.362 "name": "Existed_Raid", 00:11:20.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.362 "strip_size_kb": 64, 00:11:20.362 "state": "configuring", 00:11:20.362 "raid_level": "concat", 00:11:20.362 "superblock": false, 00:11:20.362 "num_base_bdevs": 4, 00:11:20.362 "num_base_bdevs_discovered": 0, 00:11:20.362 "num_base_bdevs_operational": 4, 00:11:20.362 "base_bdevs_list": [ 00:11:20.362 { 00:11:20.362 "name": "BaseBdev1", 00:11:20.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.362 "is_configured": false, 00:11:20.362 "data_offset": 0, 00:11:20.362 "data_size": 0 00:11:20.362 }, 00:11:20.362 { 00:11:20.362 "name": "BaseBdev2", 00:11:20.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.362 "is_configured": false, 00:11:20.362 "data_offset": 0, 00:11:20.362 "data_size": 0 00:11:20.362 }, 00:11:20.362 { 00:11:20.362 "name": "BaseBdev3", 00:11:20.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.362 "is_configured": false, 00:11:20.362 "data_offset": 0, 00:11:20.362 "data_size": 0 00:11:20.362 }, 00:11:20.362 { 00:11:20.362 "name": "BaseBdev4", 00:11:20.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.362 "is_configured": false, 00:11:20.362 "data_offset": 0, 00:11:20.362 "data_size": 0 00:11:20.362 } 00:11:20.362 ] 00:11:20.362 }' 00:11:20.362 09:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:20.362 09:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.932 09:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:11:20.932 09:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.932 09:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.932 [2024-10-21 09:55:57.291248] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:20.932 [2024-10-21 09:55:57.291357] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000005b80 name Existed_Raid, state configuring 00:11:20.932 09:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.932 09:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:20.932 09:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.932 09:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.932 [2024-10-21 09:55:57.303261] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:20.932 [2024-10-21 09:55:57.303342] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:20.932 [2024-10-21 09:55:57.303373] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:20.932 [2024-10-21 09:55:57.303396] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:20.932 [2024-10-21 09:55:57.303435] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:20.932 [2024-10-21 09:55:57.303459] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:20.932 [2024-10-21 09:55:57.303505] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:20.932 [2024-10-21 09:55:57.303547] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:20.932 09:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.932 09:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:20.932 09:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.932 09:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.932 [2024-10-21 09:55:57.353412] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:20.932 BaseBdev1 00:11:20.932 09:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.932 09:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:20.932 09:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:20.932 09:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:20.932 09:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:20.932 09:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:20.932 09:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:20.932 09:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:20.932 09:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.932 09:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.932 09:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.932 09:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:20.932 09:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.932 09:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.932 [ 00:11:20.932 { 00:11:20.932 "name": "BaseBdev1", 00:11:20.932 "aliases": [ 00:11:20.932 "56bb28ef-bcf3-45a1-8de4-bf37813685a5" 00:11:20.932 ], 00:11:20.932 "product_name": "Malloc disk", 00:11:20.932 "block_size": 512, 00:11:20.932 "num_blocks": 65536, 00:11:20.932 "uuid": "56bb28ef-bcf3-45a1-8de4-bf37813685a5", 00:11:20.932 "assigned_rate_limits": { 00:11:20.932 "rw_ios_per_sec": 0, 00:11:20.932 "rw_mbytes_per_sec": 0, 00:11:20.932 "r_mbytes_per_sec": 0, 00:11:20.932 "w_mbytes_per_sec": 0 00:11:20.932 }, 00:11:20.932 "claimed": true, 00:11:20.932 "claim_type": "exclusive_write", 00:11:20.932 "zoned": false, 00:11:20.932 "supported_io_types": { 00:11:20.932 "read": true, 00:11:20.932 "write": true, 00:11:20.932 "unmap": true, 00:11:20.932 "flush": true, 00:11:20.932 "reset": true, 00:11:20.932 "nvme_admin": false, 00:11:20.932 "nvme_io": false, 00:11:20.932 "nvme_io_md": false, 00:11:20.932 "write_zeroes": true, 00:11:20.932 "zcopy": true, 00:11:20.932 "get_zone_info": false, 00:11:20.932 "zone_management": false, 00:11:20.932 "zone_append": false, 00:11:20.932 "compare": false, 00:11:20.932 "compare_and_write": false, 00:11:20.932 "abort": true, 00:11:20.932 "seek_hole": false, 00:11:20.932 "seek_data": false, 00:11:20.932 "copy": true, 00:11:20.932 "nvme_iov_md": false 00:11:20.932 }, 00:11:20.932 "memory_domains": [ 00:11:20.932 { 00:11:20.932 "dma_device_id": "system", 00:11:20.932 "dma_device_type": 1 00:11:20.932 }, 00:11:20.932 { 00:11:20.932 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:20.932 "dma_device_type": 2 00:11:20.932 } 00:11:20.932 ], 00:11:20.932 "driver_specific": {} 00:11:20.932 } 00:11:20.932 ] 00:11:20.932 09:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:11:20.932 09:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:20.932 09:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:20.932 09:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:20.932 09:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:20.932 09:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:20.932 09:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:20.932 09:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:20.932 09:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:20.932 09:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:20.932 09:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:20.932 09:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:20.932 09:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.932 09:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:20.932 09:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.932 09:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.932 09:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.932 09:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:20.932 "name": "Existed_Raid", 
00:11:20.932 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.932 "strip_size_kb": 64, 00:11:20.932 "state": "configuring", 00:11:20.932 "raid_level": "concat", 00:11:20.932 "superblock": false, 00:11:20.932 "num_base_bdevs": 4, 00:11:20.932 "num_base_bdevs_discovered": 1, 00:11:20.932 "num_base_bdevs_operational": 4, 00:11:20.932 "base_bdevs_list": [ 00:11:20.932 { 00:11:20.932 "name": "BaseBdev1", 00:11:20.932 "uuid": "56bb28ef-bcf3-45a1-8de4-bf37813685a5", 00:11:20.932 "is_configured": true, 00:11:20.932 "data_offset": 0, 00:11:20.932 "data_size": 65536 00:11:20.932 }, 00:11:20.932 { 00:11:20.932 "name": "BaseBdev2", 00:11:20.932 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.932 "is_configured": false, 00:11:20.932 "data_offset": 0, 00:11:20.932 "data_size": 0 00:11:20.932 }, 00:11:20.932 { 00:11:20.932 "name": "BaseBdev3", 00:11:20.932 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.932 "is_configured": false, 00:11:20.932 "data_offset": 0, 00:11:20.932 "data_size": 0 00:11:20.932 }, 00:11:20.932 { 00:11:20.932 "name": "BaseBdev4", 00:11:20.932 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.932 "is_configured": false, 00:11:20.932 "data_offset": 0, 00:11:20.932 "data_size": 0 00:11:20.932 } 00:11:20.932 ] 00:11:20.932 }' 00:11:20.932 09:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:20.932 09:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.502 09:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:21.502 09:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.503 09:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.503 [2024-10-21 09:55:57.800681] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:21.503 [2024-10-21 09:55:57.800814] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000005f00 name Existed_Raid, state configuring 00:11:21.503 09:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.503 09:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:21.503 09:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.503 09:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.503 [2024-10-21 09:55:57.808733] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:21.503 [2024-10-21 09:55:57.810702] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:21.503 [2024-10-21 09:55:57.810782] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:21.503 [2024-10-21 09:55:57.810818] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:21.503 [2024-10-21 09:55:57.810847] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:21.503 [2024-10-21 09:55:57.810888] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:21.503 [2024-10-21 09:55:57.810914] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:21.503 09:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.503 09:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:21.503 09:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:21.503 09:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:11:21.503 09:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:21.503 09:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:21.503 09:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:21.503 09:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:21.503 09:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:21.503 09:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:21.503 09:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:21.503 09:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:21.503 09:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:21.503 09:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:21.503 09:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.503 09:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.503 09:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.503 09:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.503 09:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:21.503 "name": "Existed_Raid", 00:11:21.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.503 "strip_size_kb": 64, 00:11:21.503 "state": "configuring", 00:11:21.503 "raid_level": "concat", 00:11:21.503 "superblock": false, 00:11:21.503 "num_base_bdevs": 4, 00:11:21.503 
"num_base_bdevs_discovered": 1, 00:11:21.503 "num_base_bdevs_operational": 4, 00:11:21.503 "base_bdevs_list": [ 00:11:21.503 { 00:11:21.503 "name": "BaseBdev1", 00:11:21.503 "uuid": "56bb28ef-bcf3-45a1-8de4-bf37813685a5", 00:11:21.503 "is_configured": true, 00:11:21.503 "data_offset": 0, 00:11:21.503 "data_size": 65536 00:11:21.503 }, 00:11:21.503 { 00:11:21.503 "name": "BaseBdev2", 00:11:21.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.503 "is_configured": false, 00:11:21.503 "data_offset": 0, 00:11:21.503 "data_size": 0 00:11:21.503 }, 00:11:21.503 { 00:11:21.503 "name": "BaseBdev3", 00:11:21.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.503 "is_configured": false, 00:11:21.503 "data_offset": 0, 00:11:21.503 "data_size": 0 00:11:21.503 }, 00:11:21.503 { 00:11:21.503 "name": "BaseBdev4", 00:11:21.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.503 "is_configured": false, 00:11:21.503 "data_offset": 0, 00:11:21.503 "data_size": 0 00:11:21.503 } 00:11:21.503 ] 00:11:21.503 }' 00:11:21.503 09:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:21.503 09:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.763 09:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:21.763 09:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.763 09:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.763 [2024-10-21 09:55:58.304344] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:21.763 BaseBdev2 00:11:21.763 09:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.763 09:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:21.763 09:55:58 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:21.763 09:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:21.763 09:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:21.763 09:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:21.763 09:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:21.763 09:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:21.763 09:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.763 09:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.763 09:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.763 09:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:21.763 09:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.763 09:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.763 [ 00:11:21.763 { 00:11:21.763 "name": "BaseBdev2", 00:11:21.763 "aliases": [ 00:11:21.763 "b09c8901-1369-4a50-b998-c1fe5fe6f68f" 00:11:21.763 ], 00:11:21.763 "product_name": "Malloc disk", 00:11:21.763 "block_size": 512, 00:11:21.763 "num_blocks": 65536, 00:11:21.763 "uuid": "b09c8901-1369-4a50-b998-c1fe5fe6f68f", 00:11:21.763 "assigned_rate_limits": { 00:11:21.763 "rw_ios_per_sec": 0, 00:11:21.763 "rw_mbytes_per_sec": 0, 00:11:21.763 "r_mbytes_per_sec": 0, 00:11:21.763 "w_mbytes_per_sec": 0 00:11:21.763 }, 00:11:21.763 "claimed": true, 00:11:21.763 "claim_type": "exclusive_write", 00:11:21.763 "zoned": false, 00:11:21.763 "supported_io_types": { 
00:11:21.763 "read": true, 00:11:21.763 "write": true, 00:11:21.763 "unmap": true, 00:11:21.763 "flush": true, 00:11:21.763 "reset": true, 00:11:21.763 "nvme_admin": false, 00:11:21.763 "nvme_io": false, 00:11:21.763 "nvme_io_md": false, 00:11:21.763 "write_zeroes": true, 00:11:21.763 "zcopy": true, 00:11:21.763 "get_zone_info": false, 00:11:21.763 "zone_management": false, 00:11:21.763 "zone_append": false, 00:11:21.763 "compare": false, 00:11:21.763 "compare_and_write": false, 00:11:21.763 "abort": true, 00:11:21.763 "seek_hole": false, 00:11:21.763 "seek_data": false, 00:11:21.763 "copy": true, 00:11:21.763 "nvme_iov_md": false 00:11:21.763 }, 00:11:21.763 "memory_domains": [ 00:11:21.763 { 00:11:21.763 "dma_device_id": "system", 00:11:21.763 "dma_device_type": 1 00:11:21.763 }, 00:11:21.763 { 00:11:21.763 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:21.763 "dma_device_type": 2 00:11:21.763 } 00:11:21.763 ], 00:11:21.763 "driver_specific": {} 00:11:21.763 } 00:11:21.763 ] 00:11:21.763 09:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.763 09:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:21.763 09:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:21.763 09:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:21.763 09:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:21.763 09:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:21.763 09:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:21.763 09:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:21.763 09:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:11:21.763 09:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:21.763 09:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:21.763 09:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:21.763 09:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:21.763 09:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:21.763 09:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.763 09:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.763 09:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.763 09:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:22.023 09:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.023 09:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:22.023 "name": "Existed_Raid", 00:11:22.023 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.023 "strip_size_kb": 64, 00:11:22.023 "state": "configuring", 00:11:22.023 "raid_level": "concat", 00:11:22.023 "superblock": false, 00:11:22.023 "num_base_bdevs": 4, 00:11:22.023 "num_base_bdevs_discovered": 2, 00:11:22.023 "num_base_bdevs_operational": 4, 00:11:22.023 "base_bdevs_list": [ 00:11:22.023 { 00:11:22.023 "name": "BaseBdev1", 00:11:22.023 "uuid": "56bb28ef-bcf3-45a1-8de4-bf37813685a5", 00:11:22.023 "is_configured": true, 00:11:22.023 "data_offset": 0, 00:11:22.023 "data_size": 65536 00:11:22.023 }, 00:11:22.023 { 00:11:22.023 "name": "BaseBdev2", 00:11:22.023 "uuid": "b09c8901-1369-4a50-b998-c1fe5fe6f68f", 00:11:22.023 
"is_configured": true, 00:11:22.023 "data_offset": 0, 00:11:22.023 "data_size": 65536 00:11:22.023 }, 00:11:22.023 { 00:11:22.023 "name": "BaseBdev3", 00:11:22.023 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.023 "is_configured": false, 00:11:22.023 "data_offset": 0, 00:11:22.023 "data_size": 0 00:11:22.023 }, 00:11:22.023 { 00:11:22.023 "name": "BaseBdev4", 00:11:22.023 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.023 "is_configured": false, 00:11:22.023 "data_offset": 0, 00:11:22.023 "data_size": 0 00:11:22.023 } 00:11:22.023 ] 00:11:22.023 }' 00:11:22.023 09:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:22.023 09:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.283 09:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:22.283 09:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.283 09:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.283 [2024-10-21 09:55:58.844776] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:22.283 BaseBdev3 00:11:22.283 09:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.283 09:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:22.283 09:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:11:22.283 09:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:22.283 09:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:22.283 09:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:22.283 09:55:58 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:22.283 09:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:22.283 09:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.283 09:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.283 09:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.283 09:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:22.283 09:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.283 09:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.283 [ 00:11:22.283 { 00:11:22.283 "name": "BaseBdev3", 00:11:22.283 "aliases": [ 00:11:22.283 "7c753bcf-a670-40ef-9a83-37793a4ff08f" 00:11:22.283 ], 00:11:22.283 "product_name": "Malloc disk", 00:11:22.283 "block_size": 512, 00:11:22.283 "num_blocks": 65536, 00:11:22.283 "uuid": "7c753bcf-a670-40ef-9a83-37793a4ff08f", 00:11:22.283 "assigned_rate_limits": { 00:11:22.283 "rw_ios_per_sec": 0, 00:11:22.283 "rw_mbytes_per_sec": 0, 00:11:22.283 "r_mbytes_per_sec": 0, 00:11:22.283 "w_mbytes_per_sec": 0 00:11:22.283 }, 00:11:22.283 "claimed": true, 00:11:22.283 "claim_type": "exclusive_write", 00:11:22.283 "zoned": false, 00:11:22.283 "supported_io_types": { 00:11:22.283 "read": true, 00:11:22.283 "write": true, 00:11:22.283 "unmap": true, 00:11:22.283 "flush": true, 00:11:22.283 "reset": true, 00:11:22.283 "nvme_admin": false, 00:11:22.283 "nvme_io": false, 00:11:22.283 "nvme_io_md": false, 00:11:22.283 "write_zeroes": true, 00:11:22.283 "zcopy": true, 00:11:22.283 "get_zone_info": false, 00:11:22.283 "zone_management": false, 00:11:22.283 "zone_append": false, 00:11:22.283 "compare": false, 00:11:22.283 "compare_and_write": false, 
00:11:22.283 "abort": true, 00:11:22.283 "seek_hole": false, 00:11:22.283 "seek_data": false, 00:11:22.283 "copy": true, 00:11:22.283 "nvme_iov_md": false 00:11:22.283 }, 00:11:22.283 "memory_domains": [ 00:11:22.283 { 00:11:22.283 "dma_device_id": "system", 00:11:22.283 "dma_device_type": 1 00:11:22.283 }, 00:11:22.283 { 00:11:22.283 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.283 "dma_device_type": 2 00:11:22.283 } 00:11:22.283 ], 00:11:22.283 "driver_specific": {} 00:11:22.283 } 00:11:22.283 ] 00:11:22.283 09:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.283 09:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:22.283 09:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:22.283 09:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:22.283 09:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:22.283 09:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:22.543 09:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:22.543 09:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:22.543 09:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:22.543 09:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:22.543 09:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:22.543 09:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:22.543 09:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:11:22.543 09:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:22.543 09:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.543 09:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.543 09:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:22.543 09:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.543 09:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.543 09:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:22.543 "name": "Existed_Raid", 00:11:22.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.543 "strip_size_kb": 64, 00:11:22.543 "state": "configuring", 00:11:22.543 "raid_level": "concat", 00:11:22.543 "superblock": false, 00:11:22.543 "num_base_bdevs": 4, 00:11:22.543 "num_base_bdevs_discovered": 3, 00:11:22.543 "num_base_bdevs_operational": 4, 00:11:22.543 "base_bdevs_list": [ 00:11:22.543 { 00:11:22.543 "name": "BaseBdev1", 00:11:22.543 "uuid": "56bb28ef-bcf3-45a1-8de4-bf37813685a5", 00:11:22.543 "is_configured": true, 00:11:22.543 "data_offset": 0, 00:11:22.543 "data_size": 65536 00:11:22.543 }, 00:11:22.543 { 00:11:22.543 "name": "BaseBdev2", 00:11:22.543 "uuid": "b09c8901-1369-4a50-b998-c1fe5fe6f68f", 00:11:22.543 "is_configured": true, 00:11:22.543 "data_offset": 0, 00:11:22.543 "data_size": 65536 00:11:22.543 }, 00:11:22.543 { 00:11:22.543 "name": "BaseBdev3", 00:11:22.543 "uuid": "7c753bcf-a670-40ef-9a83-37793a4ff08f", 00:11:22.543 "is_configured": true, 00:11:22.543 "data_offset": 0, 00:11:22.543 "data_size": 65536 00:11:22.543 }, 00:11:22.543 { 00:11:22.543 "name": "BaseBdev4", 00:11:22.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.543 "is_configured": false, 
00:11:22.543 "data_offset": 0, 00:11:22.543 "data_size": 0 00:11:22.543 } 00:11:22.543 ] 00:11:22.543 }' 00:11:22.543 09:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:22.543 09:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.803 09:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:22.803 09:55:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.803 09:55:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.803 [2024-10-21 09:55:59.379515] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:22.803 [2024-10-21 09:55:59.379657] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:11:22.803 [2024-10-21 09:55:59.379686] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:11:22.803 [2024-10-21 09:55:59.380033] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:11:22.803 [2024-10-21 09:55:59.380290] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:11:22.803 [2024-10-21 09:55:59.380345] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006280 00:11:22.803 [2024-10-21 09:55:59.380705] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:22.803 BaseBdev4 00:11:22.803 09:55:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.803 09:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:22.803 09:55:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:11:22.803 09:55:59 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:22.803 09:55:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:22.803 09:55:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:22.803 09:55:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:22.803 09:55:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:22.803 09:55:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.803 09:55:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.803 09:55:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.803 09:55:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:22.803 09:55:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.803 09:55:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.064 [ 00:11:23.064 { 00:11:23.064 "name": "BaseBdev4", 00:11:23.064 "aliases": [ 00:11:23.064 "be9a2321-7840-444d-a736-ba184fb2e736" 00:11:23.064 ], 00:11:23.064 "product_name": "Malloc disk", 00:11:23.064 "block_size": 512, 00:11:23.064 "num_blocks": 65536, 00:11:23.064 "uuid": "be9a2321-7840-444d-a736-ba184fb2e736", 00:11:23.064 "assigned_rate_limits": { 00:11:23.064 "rw_ios_per_sec": 0, 00:11:23.064 "rw_mbytes_per_sec": 0, 00:11:23.064 "r_mbytes_per_sec": 0, 00:11:23.064 "w_mbytes_per_sec": 0 00:11:23.064 }, 00:11:23.064 "claimed": true, 00:11:23.064 "claim_type": "exclusive_write", 00:11:23.064 "zoned": false, 00:11:23.064 "supported_io_types": { 00:11:23.064 "read": true, 00:11:23.064 "write": true, 00:11:23.064 "unmap": true, 00:11:23.064 "flush": true, 00:11:23.064 "reset": true, 00:11:23.064 
"nvme_admin": false, 00:11:23.064 "nvme_io": false, 00:11:23.064 "nvme_io_md": false, 00:11:23.064 "write_zeroes": true, 00:11:23.064 "zcopy": true, 00:11:23.064 "get_zone_info": false, 00:11:23.064 "zone_management": false, 00:11:23.064 "zone_append": false, 00:11:23.064 "compare": false, 00:11:23.064 "compare_and_write": false, 00:11:23.064 "abort": true, 00:11:23.064 "seek_hole": false, 00:11:23.064 "seek_data": false, 00:11:23.064 "copy": true, 00:11:23.064 "nvme_iov_md": false 00:11:23.064 }, 00:11:23.064 "memory_domains": [ 00:11:23.064 { 00:11:23.064 "dma_device_id": "system", 00:11:23.064 "dma_device_type": 1 00:11:23.064 }, 00:11:23.064 { 00:11:23.064 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:23.064 "dma_device_type": 2 00:11:23.064 } 00:11:23.064 ], 00:11:23.064 "driver_specific": {} 00:11:23.064 } 00:11:23.064 ] 00:11:23.064 09:55:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.064 09:55:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:23.064 09:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:23.064 09:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:23.064 09:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:23.064 09:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:23.064 09:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:23.064 09:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:23.064 09:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:23.064 09:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:23.064 
09:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:23.064 09:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:23.064 09:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:23.064 09:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:23.064 09:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.064 09:55:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.064 09:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:23.064 09:55:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.064 09:55:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.064 09:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:23.064 "name": "Existed_Raid", 00:11:23.064 "uuid": "a8e3c54f-536a-47a7-918c-e2ba80aa6694", 00:11:23.064 "strip_size_kb": 64, 00:11:23.064 "state": "online", 00:11:23.064 "raid_level": "concat", 00:11:23.064 "superblock": false, 00:11:23.064 "num_base_bdevs": 4, 00:11:23.064 "num_base_bdevs_discovered": 4, 00:11:23.064 "num_base_bdevs_operational": 4, 00:11:23.064 "base_bdevs_list": [ 00:11:23.064 { 00:11:23.064 "name": "BaseBdev1", 00:11:23.064 "uuid": "56bb28ef-bcf3-45a1-8de4-bf37813685a5", 00:11:23.064 "is_configured": true, 00:11:23.064 "data_offset": 0, 00:11:23.064 "data_size": 65536 00:11:23.064 }, 00:11:23.064 { 00:11:23.064 "name": "BaseBdev2", 00:11:23.064 "uuid": "b09c8901-1369-4a50-b998-c1fe5fe6f68f", 00:11:23.064 "is_configured": true, 00:11:23.064 "data_offset": 0, 00:11:23.064 "data_size": 65536 00:11:23.064 }, 00:11:23.064 { 00:11:23.064 "name": "BaseBdev3", 
00:11:23.064 "uuid": "7c753bcf-a670-40ef-9a83-37793a4ff08f", 00:11:23.064 "is_configured": true, 00:11:23.064 "data_offset": 0, 00:11:23.064 "data_size": 65536 00:11:23.064 }, 00:11:23.064 { 00:11:23.064 "name": "BaseBdev4", 00:11:23.064 "uuid": "be9a2321-7840-444d-a736-ba184fb2e736", 00:11:23.064 "is_configured": true, 00:11:23.064 "data_offset": 0, 00:11:23.064 "data_size": 65536 00:11:23.064 } 00:11:23.064 ] 00:11:23.064 }' 00:11:23.064 09:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:23.064 09:55:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.324 09:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:23.324 09:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:23.324 09:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:23.324 09:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:23.324 09:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:23.324 09:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:23.324 09:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:23.324 09:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:23.324 09:55:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.324 09:55:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.324 [2024-10-21 09:55:59.887115] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:23.324 09:55:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.584 
09:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:23.584 "name": "Existed_Raid", 00:11:23.584 "aliases": [ 00:11:23.584 "a8e3c54f-536a-47a7-918c-e2ba80aa6694" 00:11:23.584 ], 00:11:23.584 "product_name": "Raid Volume", 00:11:23.584 "block_size": 512, 00:11:23.584 "num_blocks": 262144, 00:11:23.584 "uuid": "a8e3c54f-536a-47a7-918c-e2ba80aa6694", 00:11:23.584 "assigned_rate_limits": { 00:11:23.584 "rw_ios_per_sec": 0, 00:11:23.584 "rw_mbytes_per_sec": 0, 00:11:23.584 "r_mbytes_per_sec": 0, 00:11:23.584 "w_mbytes_per_sec": 0 00:11:23.584 }, 00:11:23.584 "claimed": false, 00:11:23.584 "zoned": false, 00:11:23.584 "supported_io_types": { 00:11:23.584 "read": true, 00:11:23.584 "write": true, 00:11:23.584 "unmap": true, 00:11:23.584 "flush": true, 00:11:23.584 "reset": true, 00:11:23.584 "nvme_admin": false, 00:11:23.584 "nvme_io": false, 00:11:23.584 "nvme_io_md": false, 00:11:23.584 "write_zeroes": true, 00:11:23.584 "zcopy": false, 00:11:23.584 "get_zone_info": false, 00:11:23.584 "zone_management": false, 00:11:23.584 "zone_append": false, 00:11:23.584 "compare": false, 00:11:23.584 "compare_and_write": false, 00:11:23.584 "abort": false, 00:11:23.584 "seek_hole": false, 00:11:23.584 "seek_data": false, 00:11:23.584 "copy": false, 00:11:23.584 "nvme_iov_md": false 00:11:23.584 }, 00:11:23.584 "memory_domains": [ 00:11:23.585 { 00:11:23.585 "dma_device_id": "system", 00:11:23.585 "dma_device_type": 1 00:11:23.585 }, 00:11:23.585 { 00:11:23.585 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:23.585 "dma_device_type": 2 00:11:23.585 }, 00:11:23.585 { 00:11:23.585 "dma_device_id": "system", 00:11:23.585 "dma_device_type": 1 00:11:23.585 }, 00:11:23.585 { 00:11:23.585 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:23.585 "dma_device_type": 2 00:11:23.585 }, 00:11:23.585 { 00:11:23.585 "dma_device_id": "system", 00:11:23.585 "dma_device_type": 1 00:11:23.585 }, 00:11:23.585 { 00:11:23.585 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:23.585 "dma_device_type": 2 00:11:23.585 }, 00:11:23.585 { 00:11:23.585 "dma_device_id": "system", 00:11:23.585 "dma_device_type": 1 00:11:23.585 }, 00:11:23.585 { 00:11:23.585 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:23.585 "dma_device_type": 2 00:11:23.585 } 00:11:23.585 ], 00:11:23.585 "driver_specific": { 00:11:23.585 "raid": { 00:11:23.585 "uuid": "a8e3c54f-536a-47a7-918c-e2ba80aa6694", 00:11:23.585 "strip_size_kb": 64, 00:11:23.585 "state": "online", 00:11:23.585 "raid_level": "concat", 00:11:23.585 "superblock": false, 00:11:23.585 "num_base_bdevs": 4, 00:11:23.585 "num_base_bdevs_discovered": 4, 00:11:23.585 "num_base_bdevs_operational": 4, 00:11:23.585 "base_bdevs_list": [ 00:11:23.585 { 00:11:23.585 "name": "BaseBdev1", 00:11:23.585 "uuid": "56bb28ef-bcf3-45a1-8de4-bf37813685a5", 00:11:23.585 "is_configured": true, 00:11:23.585 "data_offset": 0, 00:11:23.585 "data_size": 65536 00:11:23.585 }, 00:11:23.585 { 00:11:23.585 "name": "BaseBdev2", 00:11:23.585 "uuid": "b09c8901-1369-4a50-b998-c1fe5fe6f68f", 00:11:23.585 "is_configured": true, 00:11:23.585 "data_offset": 0, 00:11:23.585 "data_size": 65536 00:11:23.585 }, 00:11:23.585 { 00:11:23.585 "name": "BaseBdev3", 00:11:23.585 "uuid": "7c753bcf-a670-40ef-9a83-37793a4ff08f", 00:11:23.585 "is_configured": true, 00:11:23.585 "data_offset": 0, 00:11:23.585 "data_size": 65536 00:11:23.585 }, 00:11:23.585 { 00:11:23.585 "name": "BaseBdev4", 00:11:23.585 "uuid": "be9a2321-7840-444d-a736-ba184fb2e736", 00:11:23.585 "is_configured": true, 00:11:23.585 "data_offset": 0, 00:11:23.585 "data_size": 65536 00:11:23.585 } 00:11:23.585 ] 00:11:23.585 } 00:11:23.585 } 00:11:23.585 }' 00:11:23.585 09:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:23.585 09:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:23.585 BaseBdev2 
00:11:23.585 BaseBdev3 00:11:23.585 BaseBdev4' 00:11:23.585 09:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:23.585 09:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:23.585 09:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:23.585 09:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:23.585 09:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:23.585 09:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.585 09:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.585 09:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.585 09:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:23.585 09:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:23.585 09:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:23.585 09:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:23.585 09:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:23.585 09:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.585 09:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.585 09:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.585 09:56:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:23.585 09:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:23.585 09:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:23.585 09:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:23.585 09:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.585 09:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.585 09:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:23.585 09:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.585 09:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:23.585 09:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:23.585 09:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:23.585 09:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:23.585 09:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.585 09:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:23.585 09:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.845 09:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.845 09:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:23.845 09:56:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:23.845 09:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:23.845 09:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.845 09:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.845 [2024-10-21 09:56:00.222226] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:23.845 [2024-10-21 09:56:00.222258] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:23.845 [2024-10-21 09:56:00.222311] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:23.845 09:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.845 09:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:23.845 09:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:11:23.845 09:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:23.845 09:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:23.845 09:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:23.845 09:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:11:23.845 09:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:23.845 09:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:23.845 09:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:23.845 09:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:11:23.845 09:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:23.845 09:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:23.845 09:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:23.845 09:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:23.845 09:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:23.845 09:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.845 09:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:23.845 09:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.845 09:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.846 09:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.846 09:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:23.846 "name": "Existed_Raid", 00:11:23.846 "uuid": "a8e3c54f-536a-47a7-918c-e2ba80aa6694", 00:11:23.846 "strip_size_kb": 64, 00:11:23.846 "state": "offline", 00:11:23.846 "raid_level": "concat", 00:11:23.846 "superblock": false, 00:11:23.846 "num_base_bdevs": 4, 00:11:23.846 "num_base_bdevs_discovered": 3, 00:11:23.846 "num_base_bdevs_operational": 3, 00:11:23.846 "base_bdevs_list": [ 00:11:23.846 { 00:11:23.846 "name": null, 00:11:23.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.846 "is_configured": false, 00:11:23.846 "data_offset": 0, 00:11:23.846 "data_size": 65536 00:11:23.846 }, 00:11:23.846 { 00:11:23.846 "name": "BaseBdev2", 00:11:23.846 "uuid": "b09c8901-1369-4a50-b998-c1fe5fe6f68f", 00:11:23.846 "is_configured": 
true, 00:11:23.846 "data_offset": 0, 00:11:23.846 "data_size": 65536 00:11:23.846 }, 00:11:23.846 { 00:11:23.846 "name": "BaseBdev3", 00:11:23.846 "uuid": "7c753bcf-a670-40ef-9a83-37793a4ff08f", 00:11:23.846 "is_configured": true, 00:11:23.846 "data_offset": 0, 00:11:23.846 "data_size": 65536 00:11:23.846 }, 00:11:23.846 { 00:11:23.846 "name": "BaseBdev4", 00:11:23.846 "uuid": "be9a2321-7840-444d-a736-ba184fb2e736", 00:11:23.846 "is_configured": true, 00:11:23.846 "data_offset": 0, 00:11:23.846 "data_size": 65536 00:11:23.846 } 00:11:23.846 ] 00:11:23.846 }' 00:11:23.846 09:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:23.846 09:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.416 09:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:24.416 09:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:24.416 09:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.416 09:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:24.416 09:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.416 09:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.416 09:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.416 09:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:24.416 09:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:24.416 09:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:24.416 09:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:11:24.416 09:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.416 [2024-10-21 09:56:00.853471] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:24.416 09:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.416 09:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:24.416 09:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:24.416 09:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:24.416 09:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.416 09:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.416 09:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.416 09:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.689 09:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:24.689 09:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:24.689 09:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:24.689 09:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.689 09:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.689 [2024-10-21 09:56:01.020699] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:24.689 09:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.689 09:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:24.689 09:56:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:24.689 09:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.689 09:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:24.689 09:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.689 09:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.689 09:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.689 09:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:24.689 09:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:24.689 09:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:24.689 09:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.689 09:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.689 [2024-10-21 09:56:01.192233] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:24.689 [2024-10-21 09:56:01.192333] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state offline 00:11:24.949 09:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.949 09:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:24.949 09:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:24.949 09:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.949 09:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:11:24.949 09:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.949 09:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.949 09:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.949 09:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:24.949 09:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:24.949 09:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:24.949 09:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:24.949 09:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:24.949 09:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:24.949 09:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.949 09:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.949 BaseBdev2 00:11:24.949 09:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.949 09:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:24.949 09:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:24.949 09:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:24.949 09:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:24.949 09:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:24.949 09:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:11:24.949 09:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:24.949 09:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.949 09:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.949 09:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.949 09:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:24.949 09:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.949 09:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.949 [ 00:11:24.949 { 00:11:24.949 "name": "BaseBdev2", 00:11:24.949 "aliases": [ 00:11:24.949 "43085f41-31cf-446e-b1bb-914b9bd5751f" 00:11:24.949 ], 00:11:24.949 "product_name": "Malloc disk", 00:11:24.949 "block_size": 512, 00:11:24.949 "num_blocks": 65536, 00:11:24.949 "uuid": "43085f41-31cf-446e-b1bb-914b9bd5751f", 00:11:24.949 "assigned_rate_limits": { 00:11:24.949 "rw_ios_per_sec": 0, 00:11:24.949 "rw_mbytes_per_sec": 0, 00:11:24.949 "r_mbytes_per_sec": 0, 00:11:24.949 "w_mbytes_per_sec": 0 00:11:24.949 }, 00:11:24.949 "claimed": false, 00:11:24.949 "zoned": false, 00:11:24.949 "supported_io_types": { 00:11:24.949 "read": true, 00:11:24.949 "write": true, 00:11:24.949 "unmap": true, 00:11:24.949 "flush": true, 00:11:24.949 "reset": true, 00:11:24.949 "nvme_admin": false, 00:11:24.949 "nvme_io": false, 00:11:24.949 "nvme_io_md": false, 00:11:24.949 "write_zeroes": true, 00:11:24.949 "zcopy": true, 00:11:24.949 "get_zone_info": false, 00:11:24.949 "zone_management": false, 00:11:24.949 "zone_append": false, 00:11:24.949 "compare": false, 00:11:24.949 "compare_and_write": false, 00:11:24.949 "abort": true, 00:11:24.949 "seek_hole": false, 00:11:24.949 
"seek_data": false, 00:11:24.949 "copy": true, 00:11:24.949 "nvme_iov_md": false 00:11:24.949 }, 00:11:24.949 "memory_domains": [ 00:11:24.949 { 00:11:24.949 "dma_device_id": "system", 00:11:24.949 "dma_device_type": 1 00:11:24.949 }, 00:11:24.949 { 00:11:24.949 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.949 "dma_device_type": 2 00:11:24.949 } 00:11:24.949 ], 00:11:24.949 "driver_specific": {} 00:11:24.949 } 00:11:24.949 ] 00:11:24.949 09:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.949 09:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:24.949 09:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:24.949 09:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:24.949 09:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:24.949 09:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.949 09:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.949 BaseBdev3 00:11:24.949 09:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.949 09:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:24.949 09:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:11:24.949 09:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:24.949 09:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:24.949 09:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:24.949 09:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
00:11:24.949 09:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:24.949 09:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.949 09:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.949 09:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.949 09:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:24.949 09:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.949 09:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.949 [ 00:11:24.949 { 00:11:24.949 "name": "BaseBdev3", 00:11:24.949 "aliases": [ 00:11:24.949 "c7a5f7ce-d0f1-496b-96d5-f525f0e0eb47" 00:11:24.949 ], 00:11:24.949 "product_name": "Malloc disk", 00:11:24.949 "block_size": 512, 00:11:24.949 "num_blocks": 65536, 00:11:24.949 "uuid": "c7a5f7ce-d0f1-496b-96d5-f525f0e0eb47", 00:11:24.949 "assigned_rate_limits": { 00:11:24.949 "rw_ios_per_sec": 0, 00:11:24.949 "rw_mbytes_per_sec": 0, 00:11:24.949 "r_mbytes_per_sec": 0, 00:11:24.949 "w_mbytes_per_sec": 0 00:11:24.949 }, 00:11:24.949 "claimed": false, 00:11:24.949 "zoned": false, 00:11:24.949 "supported_io_types": { 00:11:24.949 "read": true, 00:11:24.949 "write": true, 00:11:24.949 "unmap": true, 00:11:24.949 "flush": true, 00:11:24.949 "reset": true, 00:11:24.949 "nvme_admin": false, 00:11:24.949 "nvme_io": false, 00:11:24.949 "nvme_io_md": false, 00:11:24.949 "write_zeroes": true, 00:11:24.949 "zcopy": true, 00:11:24.949 "get_zone_info": false, 00:11:24.949 "zone_management": false, 00:11:24.949 "zone_append": false, 00:11:24.949 "compare": false, 00:11:24.949 "compare_and_write": false, 00:11:24.949 "abort": true, 00:11:24.949 "seek_hole": false, 00:11:24.949 "seek_data": false, 
00:11:24.949 "copy": true, 00:11:24.949 "nvme_iov_md": false 00:11:24.949 }, 00:11:24.949 "memory_domains": [ 00:11:24.949 { 00:11:24.949 "dma_device_id": "system", 00:11:24.949 "dma_device_type": 1 00:11:24.949 }, 00:11:24.949 { 00:11:24.949 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.949 "dma_device_type": 2 00:11:24.949 } 00:11:24.949 ], 00:11:24.949 "driver_specific": {} 00:11:24.949 } 00:11:24.949 ] 00:11:24.949 09:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.949 09:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:24.949 09:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:24.949 09:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:24.949 09:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:24.949 09:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.949 09:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.208 BaseBdev4 00:11:25.208 09:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.208 09:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:25.208 09:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:11:25.208 09:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:25.208 09:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:25.208 09:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:25.208 09:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:25.208 
09:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:25.209 09:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.209 09:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.209 09:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.209 09:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:25.209 09:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.209 09:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.209 [ 00:11:25.209 { 00:11:25.209 "name": "BaseBdev4", 00:11:25.209 "aliases": [ 00:11:25.209 "a563877e-7110-47e5-b12a-02e5a8f5f266" 00:11:25.209 ], 00:11:25.209 "product_name": "Malloc disk", 00:11:25.209 "block_size": 512, 00:11:25.209 "num_blocks": 65536, 00:11:25.209 "uuid": "a563877e-7110-47e5-b12a-02e5a8f5f266", 00:11:25.209 "assigned_rate_limits": { 00:11:25.209 "rw_ios_per_sec": 0, 00:11:25.209 "rw_mbytes_per_sec": 0, 00:11:25.209 "r_mbytes_per_sec": 0, 00:11:25.209 "w_mbytes_per_sec": 0 00:11:25.209 }, 00:11:25.209 "claimed": false, 00:11:25.209 "zoned": false, 00:11:25.209 "supported_io_types": { 00:11:25.209 "read": true, 00:11:25.209 "write": true, 00:11:25.209 "unmap": true, 00:11:25.209 "flush": true, 00:11:25.209 "reset": true, 00:11:25.209 "nvme_admin": false, 00:11:25.209 "nvme_io": false, 00:11:25.209 "nvme_io_md": false, 00:11:25.209 "write_zeroes": true, 00:11:25.209 "zcopy": true, 00:11:25.209 "get_zone_info": false, 00:11:25.209 "zone_management": false, 00:11:25.209 "zone_append": false, 00:11:25.209 "compare": false, 00:11:25.209 "compare_and_write": false, 00:11:25.209 "abort": true, 00:11:25.209 "seek_hole": false, 00:11:25.209 "seek_data": false, 00:11:25.209 
"copy": true, 00:11:25.209 "nvme_iov_md": false 00:11:25.209 }, 00:11:25.209 "memory_domains": [ 00:11:25.209 { 00:11:25.209 "dma_device_id": "system", 00:11:25.209 "dma_device_type": 1 00:11:25.209 }, 00:11:25.209 { 00:11:25.209 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:25.209 "dma_device_type": 2 00:11:25.209 } 00:11:25.209 ], 00:11:25.209 "driver_specific": {} 00:11:25.209 } 00:11:25.209 ] 00:11:25.209 09:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.209 09:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:25.209 09:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:25.209 09:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:25.209 09:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:25.209 09:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.209 09:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.209 [2024-10-21 09:56:01.640606] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:25.209 [2024-10-21 09:56:01.640689] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:25.209 [2024-10-21 09:56:01.640718] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:25.209 [2024-10-21 09:56:01.642996] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:25.209 [2024-10-21 09:56:01.643066] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:25.209 09:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.209 09:56:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:25.209 09:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:25.209 09:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:25.209 09:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:25.209 09:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:25.209 09:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:25.209 09:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:25.209 09:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:25.209 09:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:25.209 09:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:25.209 09:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.209 09:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.209 09:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:25.209 09:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.209 09:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.209 09:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:25.209 "name": "Existed_Raid", 00:11:25.209 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.209 "strip_size_kb": 64, 00:11:25.209 "state": "configuring", 00:11:25.209 
"raid_level": "concat", 00:11:25.209 "superblock": false, 00:11:25.209 "num_base_bdevs": 4, 00:11:25.209 "num_base_bdevs_discovered": 3, 00:11:25.209 "num_base_bdevs_operational": 4, 00:11:25.209 "base_bdevs_list": [ 00:11:25.209 { 00:11:25.209 "name": "BaseBdev1", 00:11:25.209 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.209 "is_configured": false, 00:11:25.209 "data_offset": 0, 00:11:25.209 "data_size": 0 00:11:25.209 }, 00:11:25.209 { 00:11:25.209 "name": "BaseBdev2", 00:11:25.209 "uuid": "43085f41-31cf-446e-b1bb-914b9bd5751f", 00:11:25.209 "is_configured": true, 00:11:25.209 "data_offset": 0, 00:11:25.209 "data_size": 65536 00:11:25.209 }, 00:11:25.209 { 00:11:25.209 "name": "BaseBdev3", 00:11:25.209 "uuid": "c7a5f7ce-d0f1-496b-96d5-f525f0e0eb47", 00:11:25.209 "is_configured": true, 00:11:25.209 "data_offset": 0, 00:11:25.209 "data_size": 65536 00:11:25.209 }, 00:11:25.209 { 00:11:25.209 "name": "BaseBdev4", 00:11:25.209 "uuid": "a563877e-7110-47e5-b12a-02e5a8f5f266", 00:11:25.209 "is_configured": true, 00:11:25.209 "data_offset": 0, 00:11:25.209 "data_size": 65536 00:11:25.209 } 00:11:25.209 ] 00:11:25.209 }' 00:11:25.209 09:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:25.209 09:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.778 09:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:25.778 09:56:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.778 09:56:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.778 [2024-10-21 09:56:02.135861] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:25.778 09:56:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.778 09:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:25.778 09:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:25.778 09:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:25.778 09:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:25.778 09:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:25.778 09:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:25.778 09:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:25.778 09:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:25.778 09:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:25.778 09:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:25.778 09:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.778 09:56:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.778 09:56:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.778 09:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:25.778 09:56:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.778 09:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:25.778 "name": "Existed_Raid", 00:11:25.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.778 "strip_size_kb": 64, 00:11:25.778 "state": "configuring", 00:11:25.778 "raid_level": "concat", 00:11:25.778 "superblock": false, 
00:11:25.778 "num_base_bdevs": 4, 00:11:25.778 "num_base_bdevs_discovered": 2, 00:11:25.778 "num_base_bdevs_operational": 4, 00:11:25.778 "base_bdevs_list": [ 00:11:25.778 { 00:11:25.778 "name": "BaseBdev1", 00:11:25.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.778 "is_configured": false, 00:11:25.778 "data_offset": 0, 00:11:25.778 "data_size": 0 00:11:25.778 }, 00:11:25.778 { 00:11:25.778 "name": null, 00:11:25.778 "uuid": "43085f41-31cf-446e-b1bb-914b9bd5751f", 00:11:25.778 "is_configured": false, 00:11:25.778 "data_offset": 0, 00:11:25.778 "data_size": 65536 00:11:25.778 }, 00:11:25.778 { 00:11:25.778 "name": "BaseBdev3", 00:11:25.778 "uuid": "c7a5f7ce-d0f1-496b-96d5-f525f0e0eb47", 00:11:25.778 "is_configured": true, 00:11:25.778 "data_offset": 0, 00:11:25.778 "data_size": 65536 00:11:25.778 }, 00:11:25.778 { 00:11:25.778 "name": "BaseBdev4", 00:11:25.778 "uuid": "a563877e-7110-47e5-b12a-02e5a8f5f266", 00:11:25.778 "is_configured": true, 00:11:25.778 "data_offset": 0, 00:11:25.778 "data_size": 65536 00:11:25.778 } 00:11:25.778 ] 00:11:25.778 }' 00:11:25.778 09:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:25.778 09:56:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.038 09:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.038 09:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:26.038 09:56:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.038 09:56:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.038 09:56:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.297 09:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:26.297 09:56:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:26.297 09:56:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.297 09:56:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.297 [2024-10-21 09:56:02.694305] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:26.297 BaseBdev1 00:11:26.297 09:56:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.297 09:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:26.297 09:56:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:26.297 09:56:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:26.297 09:56:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:26.297 09:56:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:26.297 09:56:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:26.297 09:56:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:26.297 09:56:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.297 09:56:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.297 09:56:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.297 09:56:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:26.297 09:56:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.297 09:56:02 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:26.297 [ 00:11:26.297 { 00:11:26.297 "name": "BaseBdev1", 00:11:26.297 "aliases": [ 00:11:26.297 "044fe596-9da5-4d21-9594-8d1f1610190c" 00:11:26.297 ], 00:11:26.297 "product_name": "Malloc disk", 00:11:26.297 "block_size": 512, 00:11:26.297 "num_blocks": 65536, 00:11:26.297 "uuid": "044fe596-9da5-4d21-9594-8d1f1610190c", 00:11:26.297 "assigned_rate_limits": { 00:11:26.297 "rw_ios_per_sec": 0, 00:11:26.297 "rw_mbytes_per_sec": 0, 00:11:26.297 "r_mbytes_per_sec": 0, 00:11:26.297 "w_mbytes_per_sec": 0 00:11:26.297 }, 00:11:26.297 "claimed": true, 00:11:26.297 "claim_type": "exclusive_write", 00:11:26.297 "zoned": false, 00:11:26.297 "supported_io_types": { 00:11:26.297 "read": true, 00:11:26.297 "write": true, 00:11:26.297 "unmap": true, 00:11:26.297 "flush": true, 00:11:26.297 "reset": true, 00:11:26.297 "nvme_admin": false, 00:11:26.297 "nvme_io": false, 00:11:26.297 "nvme_io_md": false, 00:11:26.297 "write_zeroes": true, 00:11:26.297 "zcopy": true, 00:11:26.297 "get_zone_info": false, 00:11:26.297 "zone_management": false, 00:11:26.297 "zone_append": false, 00:11:26.297 "compare": false, 00:11:26.297 "compare_and_write": false, 00:11:26.297 "abort": true, 00:11:26.297 "seek_hole": false, 00:11:26.297 "seek_data": false, 00:11:26.297 "copy": true, 00:11:26.297 "nvme_iov_md": false 00:11:26.297 }, 00:11:26.297 "memory_domains": [ 00:11:26.297 { 00:11:26.297 "dma_device_id": "system", 00:11:26.297 "dma_device_type": 1 00:11:26.297 }, 00:11:26.297 { 00:11:26.297 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.297 "dma_device_type": 2 00:11:26.297 } 00:11:26.297 ], 00:11:26.297 "driver_specific": {} 00:11:26.297 } 00:11:26.297 ] 00:11:26.297 09:56:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.297 09:56:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:26.297 09:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:26.297 09:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:26.297 09:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:26.297 09:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:26.297 09:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:26.297 09:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:26.297 09:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.297 09:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.297 09:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.297 09:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.297 09:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.297 09:56:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.297 09:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:26.297 09:56:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.297 09:56:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.297 09:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.297 "name": "Existed_Raid", 00:11:26.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.297 "strip_size_kb": 64, 00:11:26.297 "state": "configuring", 00:11:26.297 "raid_level": "concat", 00:11:26.297 "superblock": false, 
00:11:26.297 "num_base_bdevs": 4, 00:11:26.297 "num_base_bdevs_discovered": 3, 00:11:26.297 "num_base_bdevs_operational": 4, 00:11:26.297 "base_bdevs_list": [ 00:11:26.297 { 00:11:26.297 "name": "BaseBdev1", 00:11:26.297 "uuid": "044fe596-9da5-4d21-9594-8d1f1610190c", 00:11:26.297 "is_configured": true, 00:11:26.297 "data_offset": 0, 00:11:26.297 "data_size": 65536 00:11:26.297 }, 00:11:26.297 { 00:11:26.297 "name": null, 00:11:26.298 "uuid": "43085f41-31cf-446e-b1bb-914b9bd5751f", 00:11:26.298 "is_configured": false, 00:11:26.298 "data_offset": 0, 00:11:26.298 "data_size": 65536 00:11:26.298 }, 00:11:26.298 { 00:11:26.298 "name": "BaseBdev3", 00:11:26.298 "uuid": "c7a5f7ce-d0f1-496b-96d5-f525f0e0eb47", 00:11:26.298 "is_configured": true, 00:11:26.298 "data_offset": 0, 00:11:26.298 "data_size": 65536 00:11:26.298 }, 00:11:26.298 { 00:11:26.298 "name": "BaseBdev4", 00:11:26.298 "uuid": "a563877e-7110-47e5-b12a-02e5a8f5f266", 00:11:26.298 "is_configured": true, 00:11:26.298 "data_offset": 0, 00:11:26.298 "data_size": 65536 00:11:26.298 } 00:11:26.298 ] 00:11:26.298 }' 00:11:26.298 09:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.298 09:56:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.935 09:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.935 09:56:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.935 09:56:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.935 09:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:26.935 09:56:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.935 09:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:26.935 09:56:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:26.935 09:56:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.935 09:56:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.935 [2024-10-21 09:56:03.285484] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:26.935 09:56:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.935 09:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:26.935 09:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:26.935 09:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:26.935 09:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:26.935 09:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:26.935 09:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:26.935 09:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.935 09:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.935 09:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.935 09:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.935 09:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.935 09:56:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.935 09:56:03 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:26.935 09:56:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.935 09:56:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.935 09:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.935 "name": "Existed_Raid", 00:11:26.935 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.935 "strip_size_kb": 64, 00:11:26.935 "state": "configuring", 00:11:26.935 "raid_level": "concat", 00:11:26.935 "superblock": false, 00:11:26.935 "num_base_bdevs": 4, 00:11:26.935 "num_base_bdevs_discovered": 2, 00:11:26.935 "num_base_bdevs_operational": 4, 00:11:26.935 "base_bdevs_list": [ 00:11:26.935 { 00:11:26.935 "name": "BaseBdev1", 00:11:26.935 "uuid": "044fe596-9da5-4d21-9594-8d1f1610190c", 00:11:26.935 "is_configured": true, 00:11:26.935 "data_offset": 0, 00:11:26.935 "data_size": 65536 00:11:26.935 }, 00:11:26.935 { 00:11:26.935 "name": null, 00:11:26.935 "uuid": "43085f41-31cf-446e-b1bb-914b9bd5751f", 00:11:26.935 "is_configured": false, 00:11:26.935 "data_offset": 0, 00:11:26.935 "data_size": 65536 00:11:26.935 }, 00:11:26.935 { 00:11:26.935 "name": null, 00:11:26.935 "uuid": "c7a5f7ce-d0f1-496b-96d5-f525f0e0eb47", 00:11:26.935 "is_configured": false, 00:11:26.935 "data_offset": 0, 00:11:26.935 "data_size": 65536 00:11:26.935 }, 00:11:26.935 { 00:11:26.935 "name": "BaseBdev4", 00:11:26.935 "uuid": "a563877e-7110-47e5-b12a-02e5a8f5f266", 00:11:26.935 "is_configured": true, 00:11:26.935 "data_offset": 0, 00:11:26.935 "data_size": 65536 00:11:26.935 } 00:11:26.935 ] 00:11:26.935 }' 00:11:26.935 09:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.935 09:56:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.195 09:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:11:27.195 09:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.195 09:56:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.195 09:56:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.195 09:56:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.454 09:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:27.454 09:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:27.454 09:56:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.454 09:56:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.454 [2024-10-21 09:56:03.800609] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:27.454 09:56:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.454 09:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:27.454 09:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:27.454 09:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:27.454 09:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:27.454 09:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:27.454 09:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:27.454 09:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:11:27.454 09:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.454 09:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:27.454 09:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.454 09:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.455 09:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:27.455 09:56:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.455 09:56:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.455 09:56:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.455 09:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.455 "name": "Existed_Raid", 00:11:27.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.455 "strip_size_kb": 64, 00:11:27.455 "state": "configuring", 00:11:27.455 "raid_level": "concat", 00:11:27.455 "superblock": false, 00:11:27.455 "num_base_bdevs": 4, 00:11:27.455 "num_base_bdevs_discovered": 3, 00:11:27.455 "num_base_bdevs_operational": 4, 00:11:27.455 "base_bdevs_list": [ 00:11:27.455 { 00:11:27.455 "name": "BaseBdev1", 00:11:27.455 "uuid": "044fe596-9da5-4d21-9594-8d1f1610190c", 00:11:27.455 "is_configured": true, 00:11:27.455 "data_offset": 0, 00:11:27.455 "data_size": 65536 00:11:27.455 }, 00:11:27.455 { 00:11:27.455 "name": null, 00:11:27.455 "uuid": "43085f41-31cf-446e-b1bb-914b9bd5751f", 00:11:27.455 "is_configured": false, 00:11:27.455 "data_offset": 0, 00:11:27.455 "data_size": 65536 00:11:27.455 }, 00:11:27.455 { 00:11:27.455 "name": "BaseBdev3", 00:11:27.455 "uuid": "c7a5f7ce-d0f1-496b-96d5-f525f0e0eb47", 00:11:27.455 "is_configured": 
true, 00:11:27.455 "data_offset": 0, 00:11:27.455 "data_size": 65536 00:11:27.455 }, 00:11:27.455 { 00:11:27.455 "name": "BaseBdev4", 00:11:27.455 "uuid": "a563877e-7110-47e5-b12a-02e5a8f5f266", 00:11:27.455 "is_configured": true, 00:11:27.455 "data_offset": 0, 00:11:27.455 "data_size": 65536 00:11:27.455 } 00:11:27.455 ] 00:11:27.455 }' 00:11:27.455 09:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.455 09:56:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.714 09:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.714 09:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:27.714 09:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.714 09:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.715 09:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.974 09:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:27.974 09:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:27.974 09:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.974 09:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.974 [2024-10-21 09:56:04.339881] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:27.974 09:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.974 09:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:27.974 09:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:11:27.974 09:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:27.974 09:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:27.974 09:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:27.974 09:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:27.974 09:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:27.974 09:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.974 09:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:27.974 09:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.974 09:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.974 09:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.974 09:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.974 09:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:27.974 09:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.974 09:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.974 "name": "Existed_Raid", 00:11:27.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.974 "strip_size_kb": 64, 00:11:27.975 "state": "configuring", 00:11:27.975 "raid_level": "concat", 00:11:27.975 "superblock": false, 00:11:27.975 "num_base_bdevs": 4, 00:11:27.975 "num_base_bdevs_discovered": 2, 00:11:27.975 "num_base_bdevs_operational": 4, 00:11:27.975 
"base_bdevs_list": [ 00:11:27.975 { 00:11:27.975 "name": null, 00:11:27.975 "uuid": "044fe596-9da5-4d21-9594-8d1f1610190c", 00:11:27.975 "is_configured": false, 00:11:27.975 "data_offset": 0, 00:11:27.975 "data_size": 65536 00:11:27.975 }, 00:11:27.975 { 00:11:27.975 "name": null, 00:11:27.975 "uuid": "43085f41-31cf-446e-b1bb-914b9bd5751f", 00:11:27.975 "is_configured": false, 00:11:27.975 "data_offset": 0, 00:11:27.975 "data_size": 65536 00:11:27.975 }, 00:11:27.975 { 00:11:27.975 "name": "BaseBdev3", 00:11:27.975 "uuid": "c7a5f7ce-d0f1-496b-96d5-f525f0e0eb47", 00:11:27.975 "is_configured": true, 00:11:27.975 "data_offset": 0, 00:11:27.975 "data_size": 65536 00:11:27.975 }, 00:11:27.975 { 00:11:27.975 "name": "BaseBdev4", 00:11:27.975 "uuid": "a563877e-7110-47e5-b12a-02e5a8f5f266", 00:11:27.975 "is_configured": true, 00:11:27.975 "data_offset": 0, 00:11:27.975 "data_size": 65536 00:11:27.975 } 00:11:27.975 ] 00:11:27.975 }' 00:11:27.975 09:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.975 09:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.545 09:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:28.545 09:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.545 09:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.545 09:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.545 09:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.545 09:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:28.545 09:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:28.545 09:56:04 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.545 09:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.545 [2024-10-21 09:56:04.973360] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:28.545 09:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.545 09:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:28.545 09:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:28.545 09:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:28.545 09:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:28.545 09:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:28.545 09:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:28.545 09:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.545 09:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.545 09:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:28.545 09:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.545 09:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.545 09:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:28.545 09:56:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.545 09:56:04 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.545 09:56:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.545 09:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:28.545 "name": "Existed_Raid", 00:11:28.545 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.545 "strip_size_kb": 64, 00:11:28.545 "state": "configuring", 00:11:28.545 "raid_level": "concat", 00:11:28.545 "superblock": false, 00:11:28.545 "num_base_bdevs": 4, 00:11:28.545 "num_base_bdevs_discovered": 3, 00:11:28.545 "num_base_bdevs_operational": 4, 00:11:28.545 "base_bdevs_list": [ 00:11:28.545 { 00:11:28.545 "name": null, 00:11:28.545 "uuid": "044fe596-9da5-4d21-9594-8d1f1610190c", 00:11:28.545 "is_configured": false, 00:11:28.545 "data_offset": 0, 00:11:28.545 "data_size": 65536 00:11:28.545 }, 00:11:28.545 { 00:11:28.545 "name": "BaseBdev2", 00:11:28.545 "uuid": "43085f41-31cf-446e-b1bb-914b9bd5751f", 00:11:28.545 "is_configured": true, 00:11:28.545 "data_offset": 0, 00:11:28.545 "data_size": 65536 00:11:28.545 }, 00:11:28.545 { 00:11:28.545 "name": "BaseBdev3", 00:11:28.545 "uuid": "c7a5f7ce-d0f1-496b-96d5-f525f0e0eb47", 00:11:28.545 "is_configured": true, 00:11:28.545 "data_offset": 0, 00:11:28.545 "data_size": 65536 00:11:28.545 }, 00:11:28.545 { 00:11:28.545 "name": "BaseBdev4", 00:11:28.545 "uuid": "a563877e-7110-47e5-b12a-02e5a8f5f266", 00:11:28.545 "is_configured": true, 00:11:28.545 "data_offset": 0, 00:11:28.545 "data_size": 65536 00:11:28.545 } 00:11:28.545 ] 00:11:28.545 }' 00:11:28.545 09:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.545 09:56:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.115 09:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.115 09:56:05 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.115 09:56:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.115 09:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:29.115 09:56:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.115 09:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:29.115 09:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.115 09:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:29.115 09:56:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.115 09:56:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.115 09:56:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.115 09:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 044fe596-9da5-4d21-9594-8d1f1610190c 00:11:29.115 09:56:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.115 09:56:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.115 [2024-10-21 09:56:05.560111] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:29.115 [2024-10-21 09:56:05.560195] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:11:29.115 [2024-10-21 09:56:05.560205] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:11:29.115 [2024-10-21 09:56:05.560522] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:11:29.115 [2024-10-21 09:56:05.560747] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:11:29.115 [2024-10-21 09:56:05.560774] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006600 00:11:29.115 [2024-10-21 09:56:05.561093] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:29.115 NewBaseBdev 00:11:29.115 09:56:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.115 09:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:29.115 09:56:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:11:29.115 09:56:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:29.115 09:56:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:29.115 09:56:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:29.115 09:56:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:29.115 09:56:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:29.115 09:56:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.115 09:56:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.115 09:56:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.115 09:56:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:29.115 09:56:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.115 09:56:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.115 [ 00:11:29.115 { 
00:11:29.115 "name": "NewBaseBdev", 00:11:29.115 "aliases": [ 00:11:29.115 "044fe596-9da5-4d21-9594-8d1f1610190c" 00:11:29.115 ], 00:11:29.115 "product_name": "Malloc disk", 00:11:29.115 "block_size": 512, 00:11:29.115 "num_blocks": 65536, 00:11:29.115 "uuid": "044fe596-9da5-4d21-9594-8d1f1610190c", 00:11:29.115 "assigned_rate_limits": { 00:11:29.115 "rw_ios_per_sec": 0, 00:11:29.115 "rw_mbytes_per_sec": 0, 00:11:29.115 "r_mbytes_per_sec": 0, 00:11:29.115 "w_mbytes_per_sec": 0 00:11:29.115 }, 00:11:29.115 "claimed": true, 00:11:29.115 "claim_type": "exclusive_write", 00:11:29.115 "zoned": false, 00:11:29.115 "supported_io_types": { 00:11:29.115 "read": true, 00:11:29.115 "write": true, 00:11:29.115 "unmap": true, 00:11:29.115 "flush": true, 00:11:29.115 "reset": true, 00:11:29.115 "nvme_admin": false, 00:11:29.115 "nvme_io": false, 00:11:29.115 "nvme_io_md": false, 00:11:29.115 "write_zeroes": true, 00:11:29.115 "zcopy": true, 00:11:29.115 "get_zone_info": false, 00:11:29.115 "zone_management": false, 00:11:29.115 "zone_append": false, 00:11:29.115 "compare": false, 00:11:29.115 "compare_and_write": false, 00:11:29.115 "abort": true, 00:11:29.115 "seek_hole": false, 00:11:29.115 "seek_data": false, 00:11:29.115 "copy": true, 00:11:29.115 "nvme_iov_md": false 00:11:29.115 }, 00:11:29.115 "memory_domains": [ 00:11:29.115 { 00:11:29.115 "dma_device_id": "system", 00:11:29.115 "dma_device_type": 1 00:11:29.115 }, 00:11:29.115 { 00:11:29.115 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.115 "dma_device_type": 2 00:11:29.115 } 00:11:29.115 ], 00:11:29.115 "driver_specific": {} 00:11:29.115 } 00:11:29.115 ] 00:11:29.115 09:56:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.115 09:56:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:29.115 09:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:29.115 
09:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:29.115 09:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:29.115 09:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:29.115 09:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:29.115 09:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:29.115 09:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:29.115 09:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:29.115 09:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:29.115 09:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:29.115 09:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.115 09:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:29.115 09:56:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.115 09:56:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.115 09:56:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.115 09:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:29.115 "name": "Existed_Raid", 00:11:29.116 "uuid": "0a286fd8-5e2c-4737-925d-3548d7a903c7", 00:11:29.116 "strip_size_kb": 64, 00:11:29.116 "state": "online", 00:11:29.116 "raid_level": "concat", 00:11:29.116 "superblock": false, 00:11:29.116 "num_base_bdevs": 4, 00:11:29.116 "num_base_bdevs_discovered": 4, 00:11:29.116 
"num_base_bdevs_operational": 4, 00:11:29.116 "base_bdevs_list": [ 00:11:29.116 { 00:11:29.116 "name": "NewBaseBdev", 00:11:29.116 "uuid": "044fe596-9da5-4d21-9594-8d1f1610190c", 00:11:29.116 "is_configured": true, 00:11:29.116 "data_offset": 0, 00:11:29.116 "data_size": 65536 00:11:29.116 }, 00:11:29.116 { 00:11:29.116 "name": "BaseBdev2", 00:11:29.116 "uuid": "43085f41-31cf-446e-b1bb-914b9bd5751f", 00:11:29.116 "is_configured": true, 00:11:29.116 "data_offset": 0, 00:11:29.116 "data_size": 65536 00:11:29.116 }, 00:11:29.116 { 00:11:29.116 "name": "BaseBdev3", 00:11:29.116 "uuid": "c7a5f7ce-d0f1-496b-96d5-f525f0e0eb47", 00:11:29.116 "is_configured": true, 00:11:29.116 "data_offset": 0, 00:11:29.116 "data_size": 65536 00:11:29.116 }, 00:11:29.116 { 00:11:29.116 "name": "BaseBdev4", 00:11:29.116 "uuid": "a563877e-7110-47e5-b12a-02e5a8f5f266", 00:11:29.116 "is_configured": true, 00:11:29.116 "data_offset": 0, 00:11:29.116 "data_size": 65536 00:11:29.116 } 00:11:29.116 ] 00:11:29.116 }' 00:11:29.116 09:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:29.116 09:56:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.685 09:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:29.685 09:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:29.685 09:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:29.685 09:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:29.685 09:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:29.685 09:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:29.685 09:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:29.685 
09:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:29.685 09:56:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.685 09:56:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.685 [2024-10-21 09:56:06.071817] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:29.685 09:56:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.685 09:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:29.685 "name": "Existed_Raid", 00:11:29.685 "aliases": [ 00:11:29.685 "0a286fd8-5e2c-4737-925d-3548d7a903c7" 00:11:29.685 ], 00:11:29.685 "product_name": "Raid Volume", 00:11:29.685 "block_size": 512, 00:11:29.685 "num_blocks": 262144, 00:11:29.685 "uuid": "0a286fd8-5e2c-4737-925d-3548d7a903c7", 00:11:29.685 "assigned_rate_limits": { 00:11:29.685 "rw_ios_per_sec": 0, 00:11:29.685 "rw_mbytes_per_sec": 0, 00:11:29.685 "r_mbytes_per_sec": 0, 00:11:29.685 "w_mbytes_per_sec": 0 00:11:29.685 }, 00:11:29.685 "claimed": false, 00:11:29.685 "zoned": false, 00:11:29.685 "supported_io_types": { 00:11:29.685 "read": true, 00:11:29.685 "write": true, 00:11:29.685 "unmap": true, 00:11:29.685 "flush": true, 00:11:29.685 "reset": true, 00:11:29.685 "nvme_admin": false, 00:11:29.685 "nvme_io": false, 00:11:29.685 "nvme_io_md": false, 00:11:29.685 "write_zeroes": true, 00:11:29.685 "zcopy": false, 00:11:29.685 "get_zone_info": false, 00:11:29.685 "zone_management": false, 00:11:29.685 "zone_append": false, 00:11:29.685 "compare": false, 00:11:29.685 "compare_and_write": false, 00:11:29.685 "abort": false, 00:11:29.685 "seek_hole": false, 00:11:29.685 "seek_data": false, 00:11:29.685 "copy": false, 00:11:29.685 "nvme_iov_md": false 00:11:29.685 }, 00:11:29.685 "memory_domains": [ 00:11:29.685 { 00:11:29.685 "dma_device_id": 
"system", 00:11:29.685 "dma_device_type": 1 00:11:29.685 }, 00:11:29.685 { 00:11:29.685 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.685 "dma_device_type": 2 00:11:29.685 }, 00:11:29.685 { 00:11:29.685 "dma_device_id": "system", 00:11:29.685 "dma_device_type": 1 00:11:29.685 }, 00:11:29.685 { 00:11:29.685 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.685 "dma_device_type": 2 00:11:29.685 }, 00:11:29.685 { 00:11:29.685 "dma_device_id": "system", 00:11:29.685 "dma_device_type": 1 00:11:29.685 }, 00:11:29.685 { 00:11:29.685 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.685 "dma_device_type": 2 00:11:29.685 }, 00:11:29.685 { 00:11:29.685 "dma_device_id": "system", 00:11:29.685 "dma_device_type": 1 00:11:29.685 }, 00:11:29.685 { 00:11:29.685 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.685 "dma_device_type": 2 00:11:29.685 } 00:11:29.685 ], 00:11:29.685 "driver_specific": { 00:11:29.685 "raid": { 00:11:29.685 "uuid": "0a286fd8-5e2c-4737-925d-3548d7a903c7", 00:11:29.685 "strip_size_kb": 64, 00:11:29.685 "state": "online", 00:11:29.685 "raid_level": "concat", 00:11:29.685 "superblock": false, 00:11:29.685 "num_base_bdevs": 4, 00:11:29.685 "num_base_bdevs_discovered": 4, 00:11:29.686 "num_base_bdevs_operational": 4, 00:11:29.686 "base_bdevs_list": [ 00:11:29.686 { 00:11:29.686 "name": "NewBaseBdev", 00:11:29.686 "uuid": "044fe596-9da5-4d21-9594-8d1f1610190c", 00:11:29.686 "is_configured": true, 00:11:29.686 "data_offset": 0, 00:11:29.686 "data_size": 65536 00:11:29.686 }, 00:11:29.686 { 00:11:29.686 "name": "BaseBdev2", 00:11:29.686 "uuid": "43085f41-31cf-446e-b1bb-914b9bd5751f", 00:11:29.686 "is_configured": true, 00:11:29.686 "data_offset": 0, 00:11:29.686 "data_size": 65536 00:11:29.686 }, 00:11:29.686 { 00:11:29.686 "name": "BaseBdev3", 00:11:29.686 "uuid": "c7a5f7ce-d0f1-496b-96d5-f525f0e0eb47", 00:11:29.686 "is_configured": true, 00:11:29.686 "data_offset": 0, 00:11:29.686 "data_size": 65536 00:11:29.686 }, 00:11:29.686 { 00:11:29.686 "name": 
"BaseBdev4", 00:11:29.686 "uuid": "a563877e-7110-47e5-b12a-02e5a8f5f266", 00:11:29.686 "is_configured": true, 00:11:29.686 "data_offset": 0, 00:11:29.686 "data_size": 65536 00:11:29.686 } 00:11:29.686 ] 00:11:29.686 } 00:11:29.686 } 00:11:29.686 }' 00:11:29.686 09:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:29.686 09:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:29.686 BaseBdev2 00:11:29.686 BaseBdev3 00:11:29.686 BaseBdev4' 00:11:29.686 09:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:29.686 09:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:29.686 09:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:29.686 09:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:29.686 09:56:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.686 09:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:29.686 09:56:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.686 09:56:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.686 09:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:29.686 09:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:29.686 09:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:29.686 09:56:06 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:29.686 09:56:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.686 09:56:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.686 09:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:29.686 09:56:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.686 09:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:29.686 09:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:29.686 09:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:29.686 09:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:29.686 09:56:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.686 09:56:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.686 09:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:29.946 09:56:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.946 09:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:29.946 09:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:29.946 09:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:29.946 09:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:29.946 09:56:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:29.946 09:56:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.946 09:56:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.946 09:56:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.946 09:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:29.946 09:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:29.946 09:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:29.946 09:56:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.946 09:56:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.946 [2024-10-21 09:56:06.382864] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:29.946 [2024-10-21 09:56:06.382930] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:29.946 [2024-10-21 09:56:06.383069] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:29.946 [2024-10-21 09:56:06.383177] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:29.946 [2024-10-21 09:56:06.383198] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state offline 00:11:29.946 09:56:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.946 09:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 70850 00:11:29.946 09:56:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 
-- # '[' -z 70850 ']' 00:11:29.946 09:56:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 70850 00:11:29.946 09:56:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:11:29.946 09:56:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:29.946 09:56:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70850 00:11:29.946 killing process with pid 70850 00:11:29.946 09:56:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:29.946 09:56:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:29.946 09:56:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70850' 00:11:29.946 09:56:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 70850 00:11:29.946 [2024-10-21 09:56:06.432373] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:29.946 09:56:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 70850 00:11:30.515 [2024-10-21 09:56:06.950520] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:31.912 ************************************ 00:11:31.912 END TEST raid_state_function_test 00:11:31.912 ************************************ 00:11:31.912 09:56:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:31.912 00:11:31.912 real 0m12.468s 00:11:31.912 user 0m19.571s 00:11:31.912 sys 0m2.171s 00:11:31.912 09:56:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:31.912 09:56:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.912 09:56:08 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 
00:11:31.912 09:56:08 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:31.912 09:56:08 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:31.912 09:56:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:31.912 ************************************ 00:11:31.912 START TEST raid_state_function_test_sb 00:11:31.912 ************************************ 00:11:31.912 09:56:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 4 true 00:11:31.912 09:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:11:31.912 09:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:31.912 09:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:31.912 09:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:31.912 09:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:31.912 09:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:31.912 09:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:31.912 09:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:31.912 09:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:31.912 09:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:31.912 09:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:31.912 09:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:31.912 09:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:31.912 09:56:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:31.912 09:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:31.912 09:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:31.912 09:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:31.912 09:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:31.912 09:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:31.912 09:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:31.912 09:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:31.912 09:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:31.912 09:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:31.912 09:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:31.912 09:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:11:31.912 09:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:31.912 09:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:31.912 09:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:31.912 09:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:31.912 09:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=71538 00:11:31.912 09:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:31.912 Process raid pid: 71538 00:11:31.912 09:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71538' 00:11:31.912 09:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 71538 00:11:31.912 09:56:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 71538 ']' 00:11:31.912 09:56:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:31.912 09:56:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:31.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:31.913 09:56:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:31.913 09:56:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:31.913 09:56:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.173 [2024-10-21 09:56:08.530940] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
00:11:32.173 [2024-10-21 09:56:08.531100] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:32.173 [2024-10-21 09:56:08.701510] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:32.432 [2024-10-21 09:56:08.851507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:32.692 [2024-10-21 09:56:09.137205] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:32.692 [2024-10-21 09:56:09.137273] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:32.951 09:56:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:32.951 09:56:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:11:32.951 09:56:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:32.951 09:56:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.951 09:56:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.951 [2024-10-21 09:56:09.377480] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:32.951 [2024-10-21 09:56:09.377560] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:32.951 [2024-10-21 09:56:09.377583] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:32.951 [2024-10-21 09:56:09.377597] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:32.951 [2024-10-21 09:56:09.377605] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:11:32.951 [2024-10-21 09:56:09.377617] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:32.952 [2024-10-21 09:56:09.377626] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:32.952 [2024-10-21 09:56:09.377637] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:32.952 09:56:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.952 09:56:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:32.952 09:56:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:32.952 09:56:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:32.952 09:56:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:32.952 09:56:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:32.952 09:56:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:32.952 09:56:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:32.952 09:56:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:32.952 09:56:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:32.952 09:56:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:32.952 09:56:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.952 09:56:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:32.952 
09:56:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.952 09:56:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.952 09:56:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.952 09:56:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:32.952 "name": "Existed_Raid", 00:11:32.952 "uuid": "5435b7d7-2e88-4150-9d83-1fdc8767164d", 00:11:32.952 "strip_size_kb": 64, 00:11:32.952 "state": "configuring", 00:11:32.952 "raid_level": "concat", 00:11:32.952 "superblock": true, 00:11:32.952 "num_base_bdevs": 4, 00:11:32.952 "num_base_bdevs_discovered": 0, 00:11:32.952 "num_base_bdevs_operational": 4, 00:11:32.952 "base_bdevs_list": [ 00:11:32.952 { 00:11:32.952 "name": "BaseBdev1", 00:11:32.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.952 "is_configured": false, 00:11:32.952 "data_offset": 0, 00:11:32.952 "data_size": 0 00:11:32.952 }, 00:11:32.952 { 00:11:32.952 "name": "BaseBdev2", 00:11:32.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.952 "is_configured": false, 00:11:32.952 "data_offset": 0, 00:11:32.952 "data_size": 0 00:11:32.952 }, 00:11:32.952 { 00:11:32.952 "name": "BaseBdev3", 00:11:32.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.952 "is_configured": false, 00:11:32.952 "data_offset": 0, 00:11:32.952 "data_size": 0 00:11:32.952 }, 00:11:32.952 { 00:11:32.952 "name": "BaseBdev4", 00:11:32.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.952 "is_configured": false, 00:11:32.952 "data_offset": 0, 00:11:32.952 "data_size": 0 00:11:32.952 } 00:11:32.952 ] 00:11:32.952 }' 00:11:32.952 09:56:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:32.952 09:56:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.522 09:56:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:33.522 09:56:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.522 09:56:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.522 [2024-10-21 09:56:09.812785] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:33.522 [2024-10-21 09:56:09.812868] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000005b80 name Existed_Raid, state configuring 00:11:33.522 09:56:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.522 09:56:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:33.522 09:56:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.522 09:56:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.522 [2024-10-21 09:56:09.824794] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:33.522 [2024-10-21 09:56:09.824866] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:33.522 [2024-10-21 09:56:09.824878] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:33.522 [2024-10-21 09:56:09.824890] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:33.522 [2024-10-21 09:56:09.824899] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:33.522 [2024-10-21 09:56:09.824912] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:33.522 [2024-10-21 09:56:09.824920] bdev.c:8281:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:11:33.522 [2024-10-21 09:56:09.824932] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:33.522 09:56:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.522 09:56:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:33.522 09:56:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.522 09:56:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.522 [2024-10-21 09:56:09.886382] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:33.522 BaseBdev1 00:11:33.522 09:56:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.522 09:56:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:33.522 09:56:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:33.522 09:56:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:33.522 09:56:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:33.522 09:56:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:33.522 09:56:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:33.522 09:56:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:33.522 09:56:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.522 09:56:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.522 09:56:09 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.522 09:56:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:33.522 09:56:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.522 09:56:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.522 [ 00:11:33.522 { 00:11:33.522 "name": "BaseBdev1", 00:11:33.522 "aliases": [ 00:11:33.522 "97942ec8-e986-4fe7-868c-a3387015dba5" 00:11:33.522 ], 00:11:33.522 "product_name": "Malloc disk", 00:11:33.522 "block_size": 512, 00:11:33.522 "num_blocks": 65536, 00:11:33.522 "uuid": "97942ec8-e986-4fe7-868c-a3387015dba5", 00:11:33.522 "assigned_rate_limits": { 00:11:33.522 "rw_ios_per_sec": 0, 00:11:33.522 "rw_mbytes_per_sec": 0, 00:11:33.522 "r_mbytes_per_sec": 0, 00:11:33.522 "w_mbytes_per_sec": 0 00:11:33.522 }, 00:11:33.522 "claimed": true, 00:11:33.522 "claim_type": "exclusive_write", 00:11:33.522 "zoned": false, 00:11:33.522 "supported_io_types": { 00:11:33.522 "read": true, 00:11:33.522 "write": true, 00:11:33.522 "unmap": true, 00:11:33.522 "flush": true, 00:11:33.522 "reset": true, 00:11:33.522 "nvme_admin": false, 00:11:33.522 "nvme_io": false, 00:11:33.522 "nvme_io_md": false, 00:11:33.522 "write_zeroes": true, 00:11:33.522 "zcopy": true, 00:11:33.522 "get_zone_info": false, 00:11:33.522 "zone_management": false, 00:11:33.522 "zone_append": false, 00:11:33.522 "compare": false, 00:11:33.522 "compare_and_write": false, 00:11:33.522 "abort": true, 00:11:33.522 "seek_hole": false, 00:11:33.522 "seek_data": false, 00:11:33.522 "copy": true, 00:11:33.522 "nvme_iov_md": false 00:11:33.522 }, 00:11:33.522 "memory_domains": [ 00:11:33.522 { 00:11:33.522 "dma_device_id": "system", 00:11:33.522 "dma_device_type": 1 00:11:33.522 }, 00:11:33.522 { 00:11:33.522 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:33.522 "dma_device_type": 2 00:11:33.522 } 
00:11:33.522 ], 00:11:33.522 "driver_specific": {} 00:11:33.522 } 00:11:33.522 ] 00:11:33.522 09:56:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.522 09:56:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:33.522 09:56:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:33.522 09:56:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:33.522 09:56:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:33.522 09:56:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:33.522 09:56:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:33.522 09:56:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:33.522 09:56:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:33.522 09:56:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:33.522 09:56:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:33.522 09:56:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:33.522 09:56:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.522 09:56:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:33.522 09:56:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.522 09:56:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.522 09:56:09 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.522 09:56:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:33.522 "name": "Existed_Raid", 00:11:33.522 "uuid": "f24c1ccf-9a54-401d-8d13-c0000e2621f8", 00:11:33.522 "strip_size_kb": 64, 00:11:33.522 "state": "configuring", 00:11:33.522 "raid_level": "concat", 00:11:33.522 "superblock": true, 00:11:33.522 "num_base_bdevs": 4, 00:11:33.522 "num_base_bdevs_discovered": 1, 00:11:33.522 "num_base_bdevs_operational": 4, 00:11:33.522 "base_bdevs_list": [ 00:11:33.522 { 00:11:33.522 "name": "BaseBdev1", 00:11:33.522 "uuid": "97942ec8-e986-4fe7-868c-a3387015dba5", 00:11:33.523 "is_configured": true, 00:11:33.523 "data_offset": 2048, 00:11:33.523 "data_size": 63488 00:11:33.523 }, 00:11:33.523 { 00:11:33.523 "name": "BaseBdev2", 00:11:33.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.523 "is_configured": false, 00:11:33.523 "data_offset": 0, 00:11:33.523 "data_size": 0 00:11:33.523 }, 00:11:33.523 { 00:11:33.523 "name": "BaseBdev3", 00:11:33.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.523 "is_configured": false, 00:11:33.523 "data_offset": 0, 00:11:33.523 "data_size": 0 00:11:33.523 }, 00:11:33.523 { 00:11:33.523 "name": "BaseBdev4", 00:11:33.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.523 "is_configured": false, 00:11:33.523 "data_offset": 0, 00:11:33.523 "data_size": 0 00:11:33.523 } 00:11:33.523 ] 00:11:33.523 }' 00:11:33.523 09:56:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:33.523 09:56:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.781 09:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:33.781 09:56:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.781 09:56:10 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.781 [2024-10-21 09:56:10.321784] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:33.781 [2024-10-21 09:56:10.321884] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000005f00 name Existed_Raid, state configuring 00:11:33.781 09:56:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.781 09:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:33.781 09:56:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.781 09:56:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.781 [2024-10-21 09:56:10.333834] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:33.781 [2024-10-21 09:56:10.336115] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:33.781 [2024-10-21 09:56:10.336170] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:33.782 [2024-10-21 09:56:10.336181] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:33.782 [2024-10-21 09:56:10.336194] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:33.782 [2024-10-21 09:56:10.336203] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:33.782 [2024-10-21 09:56:10.336215] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:33.782 09:56:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.782 09:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:11:33.782 09:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:33.782 09:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:33.782 09:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:33.782 09:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:33.782 09:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:33.782 09:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:33.782 09:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:33.782 09:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:33.782 09:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:33.782 09:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:33.782 09:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:33.782 09:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:33.782 09:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.782 09:56:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.782 09:56:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.782 09:56:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.042 09:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:11:34.042 "name": "Existed_Raid", 00:11:34.042 "uuid": "233fa0bc-eca7-4002-a8f7-f836ed781907", 00:11:34.042 "strip_size_kb": 64, 00:11:34.042 "state": "configuring", 00:11:34.042 "raid_level": "concat", 00:11:34.042 "superblock": true, 00:11:34.042 "num_base_bdevs": 4, 00:11:34.042 "num_base_bdevs_discovered": 1, 00:11:34.042 "num_base_bdevs_operational": 4, 00:11:34.042 "base_bdevs_list": [ 00:11:34.042 { 00:11:34.042 "name": "BaseBdev1", 00:11:34.042 "uuid": "97942ec8-e986-4fe7-868c-a3387015dba5", 00:11:34.042 "is_configured": true, 00:11:34.042 "data_offset": 2048, 00:11:34.042 "data_size": 63488 00:11:34.042 }, 00:11:34.042 { 00:11:34.042 "name": "BaseBdev2", 00:11:34.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.042 "is_configured": false, 00:11:34.042 "data_offset": 0, 00:11:34.042 "data_size": 0 00:11:34.042 }, 00:11:34.042 { 00:11:34.042 "name": "BaseBdev3", 00:11:34.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.042 "is_configured": false, 00:11:34.042 "data_offset": 0, 00:11:34.042 "data_size": 0 00:11:34.042 }, 00:11:34.042 { 00:11:34.042 "name": "BaseBdev4", 00:11:34.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.042 "is_configured": false, 00:11:34.042 "data_offset": 0, 00:11:34.042 "data_size": 0 00:11:34.042 } 00:11:34.042 ] 00:11:34.042 }' 00:11:34.042 09:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.042 09:56:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.301 09:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:34.301 09:56:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.301 09:56:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.301 [2024-10-21 09:56:10.858329] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:11:34.301 BaseBdev2 00:11:34.302 09:56:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.302 09:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:34.302 09:56:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:34.302 09:56:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:34.302 09:56:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:34.302 09:56:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:34.302 09:56:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:34.302 09:56:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:34.302 09:56:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.302 09:56:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.302 09:56:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.302 09:56:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:34.302 09:56:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.302 09:56:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.302 [ 00:11:34.302 { 00:11:34.302 "name": "BaseBdev2", 00:11:34.302 "aliases": [ 00:11:34.302 "1c01195f-67f2-4cbb-9a45-b5bb84c75a3c" 00:11:34.302 ], 00:11:34.302 "product_name": "Malloc disk", 00:11:34.302 "block_size": 512, 00:11:34.302 "num_blocks": 65536, 00:11:34.302 "uuid": "1c01195f-67f2-4cbb-9a45-b5bb84c75a3c", 
00:11:34.302 "assigned_rate_limits": { 00:11:34.302 "rw_ios_per_sec": 0, 00:11:34.302 "rw_mbytes_per_sec": 0, 00:11:34.302 "r_mbytes_per_sec": 0, 00:11:34.302 "w_mbytes_per_sec": 0 00:11:34.302 }, 00:11:34.302 "claimed": true, 00:11:34.302 "claim_type": "exclusive_write", 00:11:34.302 "zoned": false, 00:11:34.302 "supported_io_types": { 00:11:34.302 "read": true, 00:11:34.302 "write": true, 00:11:34.302 "unmap": true, 00:11:34.302 "flush": true, 00:11:34.302 "reset": true, 00:11:34.302 "nvme_admin": false, 00:11:34.302 "nvme_io": false, 00:11:34.302 "nvme_io_md": false, 00:11:34.302 "write_zeroes": true, 00:11:34.302 "zcopy": true, 00:11:34.302 "get_zone_info": false, 00:11:34.302 "zone_management": false, 00:11:34.302 "zone_append": false, 00:11:34.302 "compare": false, 00:11:34.302 "compare_and_write": false, 00:11:34.302 "abort": true, 00:11:34.302 "seek_hole": false, 00:11:34.302 "seek_data": false, 00:11:34.302 "copy": true, 00:11:34.302 "nvme_iov_md": false 00:11:34.302 }, 00:11:34.302 "memory_domains": [ 00:11:34.302 { 00:11:34.302 "dma_device_id": "system", 00:11:34.302 "dma_device_type": 1 00:11:34.302 }, 00:11:34.302 { 00:11:34.302 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.302 "dma_device_type": 2 00:11:34.302 } 00:11:34.302 ], 00:11:34.302 "driver_specific": {} 00:11:34.302 } 00:11:34.302 ] 00:11:34.302 09:56:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.302 09:56:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:34.302 09:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:34.302 09:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:34.302 09:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:34.302 09:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:11:34.302 09:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:34.302 09:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:34.302 09:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:34.302 09:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:34.302 09:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.302 09:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.302 09:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:34.302 09:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:34.561 09:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.561 09:56:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.561 09:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:34.561 09:56:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.561 09:56:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.561 09:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:34.561 "name": "Existed_Raid", 00:11:34.561 "uuid": "233fa0bc-eca7-4002-a8f7-f836ed781907", 00:11:34.561 "strip_size_kb": 64, 00:11:34.561 "state": "configuring", 00:11:34.561 "raid_level": "concat", 00:11:34.561 "superblock": true, 00:11:34.561 "num_base_bdevs": 4, 00:11:34.561 "num_base_bdevs_discovered": 2, 00:11:34.561 
"num_base_bdevs_operational": 4, 00:11:34.561 "base_bdevs_list": [ 00:11:34.561 { 00:11:34.561 "name": "BaseBdev1", 00:11:34.561 "uuid": "97942ec8-e986-4fe7-868c-a3387015dba5", 00:11:34.561 "is_configured": true, 00:11:34.561 "data_offset": 2048, 00:11:34.561 "data_size": 63488 00:11:34.561 }, 00:11:34.561 { 00:11:34.561 "name": "BaseBdev2", 00:11:34.561 "uuid": "1c01195f-67f2-4cbb-9a45-b5bb84c75a3c", 00:11:34.561 "is_configured": true, 00:11:34.561 "data_offset": 2048, 00:11:34.561 "data_size": 63488 00:11:34.561 }, 00:11:34.561 { 00:11:34.561 "name": "BaseBdev3", 00:11:34.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.561 "is_configured": false, 00:11:34.561 "data_offset": 0, 00:11:34.561 "data_size": 0 00:11:34.561 }, 00:11:34.561 { 00:11:34.561 "name": "BaseBdev4", 00:11:34.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.561 "is_configured": false, 00:11:34.561 "data_offset": 0, 00:11:34.561 "data_size": 0 00:11:34.561 } 00:11:34.561 ] 00:11:34.561 }' 00:11:34.561 09:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.561 09:56:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.821 09:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:34.821 09:56:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.821 09:56:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.821 [2024-10-21 09:56:11.393906] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:34.821 BaseBdev3 00:11:34.821 09:56:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.821 09:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:34.821 09:56:11 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:11:34.821 09:56:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:34.821 09:56:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:34.821 09:56:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:34.821 09:56:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:34.821 09:56:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:34.821 09:56:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.821 09:56:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.821 09:56:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.821 09:56:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:34.821 09:56:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.821 09:56:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.080 [ 00:11:35.080 { 00:11:35.080 "name": "BaseBdev3", 00:11:35.080 "aliases": [ 00:11:35.080 "39483840-0fea-4275-9b20-f052328122a7" 00:11:35.080 ], 00:11:35.080 "product_name": "Malloc disk", 00:11:35.080 "block_size": 512, 00:11:35.080 "num_blocks": 65536, 00:11:35.080 "uuid": "39483840-0fea-4275-9b20-f052328122a7", 00:11:35.080 "assigned_rate_limits": { 00:11:35.080 "rw_ios_per_sec": 0, 00:11:35.080 "rw_mbytes_per_sec": 0, 00:11:35.080 "r_mbytes_per_sec": 0, 00:11:35.080 "w_mbytes_per_sec": 0 00:11:35.080 }, 00:11:35.080 "claimed": true, 00:11:35.080 "claim_type": "exclusive_write", 00:11:35.080 "zoned": false, 00:11:35.080 "supported_io_types": { 
00:11:35.080 "read": true, 00:11:35.080 "write": true, 00:11:35.080 "unmap": true, 00:11:35.080 "flush": true, 00:11:35.080 "reset": true, 00:11:35.080 "nvme_admin": false, 00:11:35.080 "nvme_io": false, 00:11:35.080 "nvme_io_md": false, 00:11:35.080 "write_zeroes": true, 00:11:35.080 "zcopy": true, 00:11:35.080 "get_zone_info": false, 00:11:35.080 "zone_management": false, 00:11:35.080 "zone_append": false, 00:11:35.080 "compare": false, 00:11:35.080 "compare_and_write": false, 00:11:35.080 "abort": true, 00:11:35.080 "seek_hole": false, 00:11:35.080 "seek_data": false, 00:11:35.080 "copy": true, 00:11:35.080 "nvme_iov_md": false 00:11:35.080 }, 00:11:35.080 "memory_domains": [ 00:11:35.080 { 00:11:35.080 "dma_device_id": "system", 00:11:35.080 "dma_device_type": 1 00:11:35.080 }, 00:11:35.080 { 00:11:35.080 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.080 "dma_device_type": 2 00:11:35.080 } 00:11:35.080 ], 00:11:35.080 "driver_specific": {} 00:11:35.080 } 00:11:35.080 ] 00:11:35.080 09:56:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.080 09:56:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:35.080 09:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:35.081 09:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:35.081 09:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:35.081 09:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:35.081 09:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:35.081 09:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:35.081 09:56:11 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:35.081 09:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:35.081 09:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:35.081 09:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:35.081 09:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:35.081 09:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:35.081 09:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.081 09:56:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.081 09:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:35.081 09:56:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.081 09:56:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.081 09:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:35.081 "name": "Existed_Raid", 00:11:35.081 "uuid": "233fa0bc-eca7-4002-a8f7-f836ed781907", 00:11:35.081 "strip_size_kb": 64, 00:11:35.081 "state": "configuring", 00:11:35.081 "raid_level": "concat", 00:11:35.081 "superblock": true, 00:11:35.081 "num_base_bdevs": 4, 00:11:35.081 "num_base_bdevs_discovered": 3, 00:11:35.081 "num_base_bdevs_operational": 4, 00:11:35.081 "base_bdevs_list": [ 00:11:35.081 { 00:11:35.081 "name": "BaseBdev1", 00:11:35.081 "uuid": "97942ec8-e986-4fe7-868c-a3387015dba5", 00:11:35.081 "is_configured": true, 00:11:35.081 "data_offset": 2048, 00:11:35.081 "data_size": 63488 00:11:35.081 }, 00:11:35.081 { 00:11:35.081 "name": "BaseBdev2", 00:11:35.081 
"uuid": "1c01195f-67f2-4cbb-9a45-b5bb84c75a3c", 00:11:35.081 "is_configured": true, 00:11:35.081 "data_offset": 2048, 00:11:35.081 "data_size": 63488 00:11:35.081 }, 00:11:35.081 { 00:11:35.081 "name": "BaseBdev3", 00:11:35.081 "uuid": "39483840-0fea-4275-9b20-f052328122a7", 00:11:35.081 "is_configured": true, 00:11:35.081 "data_offset": 2048, 00:11:35.081 "data_size": 63488 00:11:35.081 }, 00:11:35.081 { 00:11:35.081 "name": "BaseBdev4", 00:11:35.081 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.081 "is_configured": false, 00:11:35.081 "data_offset": 0, 00:11:35.081 "data_size": 0 00:11:35.081 } 00:11:35.081 ] 00:11:35.081 }' 00:11:35.081 09:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:35.081 09:56:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.340 09:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:35.340 09:56:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.340 09:56:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.599 [2024-10-21 09:56:11.945026] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:35.599 [2024-10-21 09:56:11.945428] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:11:35.599 [2024-10-21 09:56:11.945457] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:35.599 [2024-10-21 09:56:11.945822] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:11:35.599 BaseBdev4 00:11:35.599 [2024-10-21 09:56:11.946036] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:11:35.599 [2024-10-21 09:56:11.946063] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000006280 00:11:35.599 [2024-10-21 09:56:11.946261] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:35.599 09:56:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.599 09:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:35.599 09:56:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:11:35.599 09:56:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:35.599 09:56:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:35.599 09:56:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:35.599 09:56:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:35.599 09:56:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:35.599 09:56:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.599 09:56:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.599 09:56:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.599 09:56:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:35.599 09:56:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.599 09:56:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.599 [ 00:11:35.599 { 00:11:35.599 "name": "BaseBdev4", 00:11:35.599 "aliases": [ 00:11:35.599 "6b7ac2e4-13e9-4f59-aef6-e57544dc6a10" 00:11:35.599 ], 00:11:35.599 "product_name": "Malloc disk", 00:11:35.599 "block_size": 512, 00:11:35.599 
"num_blocks": 65536, 00:11:35.599 "uuid": "6b7ac2e4-13e9-4f59-aef6-e57544dc6a10", 00:11:35.599 "assigned_rate_limits": { 00:11:35.599 "rw_ios_per_sec": 0, 00:11:35.599 "rw_mbytes_per_sec": 0, 00:11:35.599 "r_mbytes_per_sec": 0, 00:11:35.599 "w_mbytes_per_sec": 0 00:11:35.599 }, 00:11:35.599 "claimed": true, 00:11:35.599 "claim_type": "exclusive_write", 00:11:35.599 "zoned": false, 00:11:35.599 "supported_io_types": { 00:11:35.599 "read": true, 00:11:35.599 "write": true, 00:11:35.599 "unmap": true, 00:11:35.599 "flush": true, 00:11:35.599 "reset": true, 00:11:35.599 "nvme_admin": false, 00:11:35.599 "nvme_io": false, 00:11:35.599 "nvme_io_md": false, 00:11:35.599 "write_zeroes": true, 00:11:35.599 "zcopy": true, 00:11:35.599 "get_zone_info": false, 00:11:35.599 "zone_management": false, 00:11:35.599 "zone_append": false, 00:11:35.599 "compare": false, 00:11:35.599 "compare_and_write": false, 00:11:35.599 "abort": true, 00:11:35.599 "seek_hole": false, 00:11:35.599 "seek_data": false, 00:11:35.599 "copy": true, 00:11:35.599 "nvme_iov_md": false 00:11:35.599 }, 00:11:35.599 "memory_domains": [ 00:11:35.599 { 00:11:35.599 "dma_device_id": "system", 00:11:35.599 "dma_device_type": 1 00:11:35.599 }, 00:11:35.599 { 00:11:35.599 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.599 "dma_device_type": 2 00:11:35.599 } 00:11:35.599 ], 00:11:35.599 "driver_specific": {} 00:11:35.599 } 00:11:35.599 ] 00:11:35.599 09:56:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.599 09:56:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:35.599 09:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:35.599 09:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:35.599 09:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 
00:11:35.599 09:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:35.599 09:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:35.599 09:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:35.599 09:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:35.599 09:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:35.599 09:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:35.599 09:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:35.600 09:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:35.600 09:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:35.600 09:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.600 09:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:35.600 09:56:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.600 09:56:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.600 09:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.600 09:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:35.600 "name": "Existed_Raid", 00:11:35.600 "uuid": "233fa0bc-eca7-4002-a8f7-f836ed781907", 00:11:35.600 "strip_size_kb": 64, 00:11:35.600 "state": "online", 00:11:35.600 "raid_level": "concat", 00:11:35.600 "superblock": true, 00:11:35.600 "num_base_bdevs": 4, 
00:11:35.600 "num_base_bdevs_discovered": 4, 00:11:35.600 "num_base_bdevs_operational": 4, 00:11:35.600 "base_bdevs_list": [ 00:11:35.600 { 00:11:35.600 "name": "BaseBdev1", 00:11:35.600 "uuid": "97942ec8-e986-4fe7-868c-a3387015dba5", 00:11:35.600 "is_configured": true, 00:11:35.600 "data_offset": 2048, 00:11:35.600 "data_size": 63488 00:11:35.600 }, 00:11:35.600 { 00:11:35.600 "name": "BaseBdev2", 00:11:35.600 "uuid": "1c01195f-67f2-4cbb-9a45-b5bb84c75a3c", 00:11:35.600 "is_configured": true, 00:11:35.600 "data_offset": 2048, 00:11:35.600 "data_size": 63488 00:11:35.600 }, 00:11:35.600 { 00:11:35.600 "name": "BaseBdev3", 00:11:35.600 "uuid": "39483840-0fea-4275-9b20-f052328122a7", 00:11:35.600 "is_configured": true, 00:11:35.600 "data_offset": 2048, 00:11:35.600 "data_size": 63488 00:11:35.600 }, 00:11:35.600 { 00:11:35.600 "name": "BaseBdev4", 00:11:35.600 "uuid": "6b7ac2e4-13e9-4f59-aef6-e57544dc6a10", 00:11:35.600 "is_configured": true, 00:11:35.600 "data_offset": 2048, 00:11:35.600 "data_size": 63488 00:11:35.600 } 00:11:35.600 ] 00:11:35.600 }' 00:11:35.600 09:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:35.600 09:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.859 09:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:35.859 09:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:35.859 09:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:35.859 09:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:35.859 09:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:35.859 09:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:35.859 
09:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:35.859 09:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:35.859 09:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.859 09:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.860 [2024-10-21 09:56:12.452685] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:36.120 09:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.120 09:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:36.120 "name": "Existed_Raid", 00:11:36.120 "aliases": [ 00:11:36.120 "233fa0bc-eca7-4002-a8f7-f836ed781907" 00:11:36.120 ], 00:11:36.120 "product_name": "Raid Volume", 00:11:36.120 "block_size": 512, 00:11:36.120 "num_blocks": 253952, 00:11:36.120 "uuid": "233fa0bc-eca7-4002-a8f7-f836ed781907", 00:11:36.120 "assigned_rate_limits": { 00:11:36.120 "rw_ios_per_sec": 0, 00:11:36.120 "rw_mbytes_per_sec": 0, 00:11:36.120 "r_mbytes_per_sec": 0, 00:11:36.120 "w_mbytes_per_sec": 0 00:11:36.120 }, 00:11:36.120 "claimed": false, 00:11:36.120 "zoned": false, 00:11:36.120 "supported_io_types": { 00:11:36.120 "read": true, 00:11:36.120 "write": true, 00:11:36.120 "unmap": true, 00:11:36.120 "flush": true, 00:11:36.120 "reset": true, 00:11:36.120 "nvme_admin": false, 00:11:36.120 "nvme_io": false, 00:11:36.120 "nvme_io_md": false, 00:11:36.120 "write_zeroes": true, 00:11:36.120 "zcopy": false, 00:11:36.120 "get_zone_info": false, 00:11:36.120 "zone_management": false, 00:11:36.120 "zone_append": false, 00:11:36.120 "compare": false, 00:11:36.120 "compare_and_write": false, 00:11:36.120 "abort": false, 00:11:36.120 "seek_hole": false, 00:11:36.120 "seek_data": false, 00:11:36.120 "copy": false, 00:11:36.120 
"nvme_iov_md": false 00:11:36.120 }, 00:11:36.120 "memory_domains": [ 00:11:36.120 { 00:11:36.120 "dma_device_id": "system", 00:11:36.120 "dma_device_type": 1 00:11:36.120 }, 00:11:36.120 { 00:11:36.120 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.120 "dma_device_type": 2 00:11:36.120 }, 00:11:36.120 { 00:11:36.120 "dma_device_id": "system", 00:11:36.120 "dma_device_type": 1 00:11:36.120 }, 00:11:36.120 { 00:11:36.120 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.120 "dma_device_type": 2 00:11:36.120 }, 00:11:36.120 { 00:11:36.120 "dma_device_id": "system", 00:11:36.120 "dma_device_type": 1 00:11:36.120 }, 00:11:36.120 { 00:11:36.120 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.120 "dma_device_type": 2 00:11:36.120 }, 00:11:36.120 { 00:11:36.120 "dma_device_id": "system", 00:11:36.120 "dma_device_type": 1 00:11:36.120 }, 00:11:36.120 { 00:11:36.120 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.120 "dma_device_type": 2 00:11:36.120 } 00:11:36.120 ], 00:11:36.120 "driver_specific": { 00:11:36.120 "raid": { 00:11:36.120 "uuid": "233fa0bc-eca7-4002-a8f7-f836ed781907", 00:11:36.120 "strip_size_kb": 64, 00:11:36.120 "state": "online", 00:11:36.120 "raid_level": "concat", 00:11:36.120 "superblock": true, 00:11:36.120 "num_base_bdevs": 4, 00:11:36.120 "num_base_bdevs_discovered": 4, 00:11:36.120 "num_base_bdevs_operational": 4, 00:11:36.120 "base_bdevs_list": [ 00:11:36.120 { 00:11:36.120 "name": "BaseBdev1", 00:11:36.120 "uuid": "97942ec8-e986-4fe7-868c-a3387015dba5", 00:11:36.120 "is_configured": true, 00:11:36.120 "data_offset": 2048, 00:11:36.120 "data_size": 63488 00:11:36.120 }, 00:11:36.120 { 00:11:36.120 "name": "BaseBdev2", 00:11:36.120 "uuid": "1c01195f-67f2-4cbb-9a45-b5bb84c75a3c", 00:11:36.120 "is_configured": true, 00:11:36.120 "data_offset": 2048, 00:11:36.120 "data_size": 63488 00:11:36.120 }, 00:11:36.120 { 00:11:36.120 "name": "BaseBdev3", 00:11:36.120 "uuid": "39483840-0fea-4275-9b20-f052328122a7", 00:11:36.120 "is_configured": true, 
00:11:36.120 "data_offset": 2048, 00:11:36.120 "data_size": 63488 00:11:36.120 }, 00:11:36.120 { 00:11:36.120 "name": "BaseBdev4", 00:11:36.120 "uuid": "6b7ac2e4-13e9-4f59-aef6-e57544dc6a10", 00:11:36.120 "is_configured": true, 00:11:36.120 "data_offset": 2048, 00:11:36.120 "data_size": 63488 00:11:36.120 } 00:11:36.120 ] 00:11:36.120 } 00:11:36.120 } 00:11:36.120 }' 00:11:36.120 09:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:36.120 09:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:36.120 BaseBdev2 00:11:36.120 BaseBdev3 00:11:36.120 BaseBdev4' 00:11:36.120 09:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.120 09:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:36.120 09:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:36.120 09:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.120 09:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:36.120 09:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.120 09:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.120 09:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.120 09:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:36.120 09:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:36.120 09:56:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:36.120 09:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:36.120 09:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.120 09:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.120 09:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.120 09:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.120 09:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:36.120 09:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:36.120 09:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:36.120 09:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.120 09:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:36.120 09:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.120 09:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.120 09:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.120 09:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:36.120 09:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:36.120 09:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:11:36.120 09:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:36.120 09:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.120 09:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.120 09:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.120 09:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.120 09:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:36.120 09:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:36.120 09:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:36.120 09:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.120 09:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.120 [2024-10-21 09:56:12.707872] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:36.120 [2024-10-21 09:56:12.707916] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:36.120 [2024-10-21 09:56:12.707980] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:36.380 09:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.380 09:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:36.380 09:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:11:36.380 09:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:11:36.380 09:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:11:36.380 09:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:36.380 09:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:11:36.380 09:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:36.380 09:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:36.380 09:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:36.380 09:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:36.380 09:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:36.380 09:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:36.380 09:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:36.380 09:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:36.380 09:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:36.380 09:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.380 09:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.380 09:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:36.380 09:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.380 09:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:11:36.380 09:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.380 "name": "Existed_Raid", 00:11:36.380 "uuid": "233fa0bc-eca7-4002-a8f7-f836ed781907", 00:11:36.380 "strip_size_kb": 64, 00:11:36.381 "state": "offline", 00:11:36.381 "raid_level": "concat", 00:11:36.381 "superblock": true, 00:11:36.381 "num_base_bdevs": 4, 00:11:36.381 "num_base_bdevs_discovered": 3, 00:11:36.381 "num_base_bdevs_operational": 3, 00:11:36.381 "base_bdevs_list": [ 00:11:36.381 { 00:11:36.381 "name": null, 00:11:36.381 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.381 "is_configured": false, 00:11:36.381 "data_offset": 0, 00:11:36.381 "data_size": 63488 00:11:36.381 }, 00:11:36.381 { 00:11:36.381 "name": "BaseBdev2", 00:11:36.381 "uuid": "1c01195f-67f2-4cbb-9a45-b5bb84c75a3c", 00:11:36.381 "is_configured": true, 00:11:36.381 "data_offset": 2048, 00:11:36.381 "data_size": 63488 00:11:36.381 }, 00:11:36.381 { 00:11:36.381 "name": "BaseBdev3", 00:11:36.381 "uuid": "39483840-0fea-4275-9b20-f052328122a7", 00:11:36.381 "is_configured": true, 00:11:36.381 "data_offset": 2048, 00:11:36.381 "data_size": 63488 00:11:36.381 }, 00:11:36.381 { 00:11:36.381 "name": "BaseBdev4", 00:11:36.381 "uuid": "6b7ac2e4-13e9-4f59-aef6-e57544dc6a10", 00:11:36.381 "is_configured": true, 00:11:36.381 "data_offset": 2048, 00:11:36.381 "data_size": 63488 00:11:36.381 } 00:11:36.381 ] 00:11:36.381 }' 00:11:36.381 09:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.381 09:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.640 09:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:36.640 09:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:36.640 09:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:36.640 09:56:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.640 09:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.640 09:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.640 09:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.910 09:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:36.910 09:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:36.910 09:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:36.910 09:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.910 09:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.910 [2024-10-21 09:56:13.244751] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:36.910 09:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.910 09:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:36.910 09:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:36.910 09:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.910 09:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:36.910 09:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.910 09:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.910 09:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:11:36.910 09:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:36.911 09:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:36.911 09:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:36.911 09:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.911 09:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.911 [2024-10-21 09:56:13.415334] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:37.186 09:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.186 09:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:37.186 09:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:37.186 09:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.186 09:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:37.186 09:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.186 09:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.186 09:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.186 09:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:37.186 09:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:37.186 09:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:37.186 09:56:13 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.186 09:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.186 [2024-10-21 09:56:13.586304] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:37.186 [2024-10-21 09:56:13.586396] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state offline 00:11:37.186 09:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.186 09:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:37.186 09:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:37.186 09:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.186 09:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.186 09:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:37.186 09:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.186 09:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.186 09:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:37.186 09:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:37.186 09:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:37.186 09:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:37.186 09:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:37.186 09:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:11:37.186 09:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.186 09:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.447 BaseBdev2 00:11:37.447 09:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.447 09:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:37.447 09:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:37.447 09:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:37.447 09:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:37.447 09:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:37.447 09:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:37.447 09:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:37.447 09:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.447 09:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.447 09:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.447 09:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:37.447 09:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.447 09:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.447 [ 00:11:37.447 { 00:11:37.447 "name": "BaseBdev2", 00:11:37.447 "aliases": [ 00:11:37.447 
"08148ee7-0129-4741-bf43-bb1e6fb5f9a4" 00:11:37.447 ], 00:11:37.447 "product_name": "Malloc disk", 00:11:37.447 "block_size": 512, 00:11:37.447 "num_blocks": 65536, 00:11:37.447 "uuid": "08148ee7-0129-4741-bf43-bb1e6fb5f9a4", 00:11:37.447 "assigned_rate_limits": { 00:11:37.447 "rw_ios_per_sec": 0, 00:11:37.447 "rw_mbytes_per_sec": 0, 00:11:37.447 "r_mbytes_per_sec": 0, 00:11:37.447 "w_mbytes_per_sec": 0 00:11:37.447 }, 00:11:37.447 "claimed": false, 00:11:37.447 "zoned": false, 00:11:37.447 "supported_io_types": { 00:11:37.447 "read": true, 00:11:37.447 "write": true, 00:11:37.447 "unmap": true, 00:11:37.447 "flush": true, 00:11:37.447 "reset": true, 00:11:37.447 "nvme_admin": false, 00:11:37.447 "nvme_io": false, 00:11:37.447 "nvme_io_md": false, 00:11:37.447 "write_zeroes": true, 00:11:37.447 "zcopy": true, 00:11:37.447 "get_zone_info": false, 00:11:37.447 "zone_management": false, 00:11:37.447 "zone_append": false, 00:11:37.447 "compare": false, 00:11:37.447 "compare_and_write": false, 00:11:37.447 "abort": true, 00:11:37.447 "seek_hole": false, 00:11:37.447 "seek_data": false, 00:11:37.447 "copy": true, 00:11:37.447 "nvme_iov_md": false 00:11:37.447 }, 00:11:37.447 "memory_domains": [ 00:11:37.447 { 00:11:37.447 "dma_device_id": "system", 00:11:37.447 "dma_device_type": 1 00:11:37.447 }, 00:11:37.447 { 00:11:37.447 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:37.447 "dma_device_type": 2 00:11:37.447 } 00:11:37.447 ], 00:11:37.447 "driver_specific": {} 00:11:37.447 } 00:11:37.447 ] 00:11:37.447 09:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.447 09:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:37.447 09:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:37.447 09:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:37.447 09:56:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:37.447 09:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.447 09:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.447 BaseBdev3 00:11:37.447 09:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.447 09:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:37.447 09:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:11:37.447 09:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:37.447 09:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:37.447 09:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:37.447 09:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:37.447 09:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:37.447 09:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.447 09:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.447 09:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.447 09:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:37.447 09:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.447 09:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.447 [ 00:11:37.447 { 
00:11:37.447 "name": "BaseBdev3", 00:11:37.447 "aliases": [ 00:11:37.447 "e5e4a26f-ec02-493d-a84f-32146021b1ab" 00:11:37.447 ], 00:11:37.447 "product_name": "Malloc disk", 00:11:37.447 "block_size": 512, 00:11:37.447 "num_blocks": 65536, 00:11:37.447 "uuid": "e5e4a26f-ec02-493d-a84f-32146021b1ab", 00:11:37.447 "assigned_rate_limits": { 00:11:37.447 "rw_ios_per_sec": 0, 00:11:37.447 "rw_mbytes_per_sec": 0, 00:11:37.448 "r_mbytes_per_sec": 0, 00:11:37.448 "w_mbytes_per_sec": 0 00:11:37.448 }, 00:11:37.448 "claimed": false, 00:11:37.448 "zoned": false, 00:11:37.448 "supported_io_types": { 00:11:37.448 "read": true, 00:11:37.448 "write": true, 00:11:37.448 "unmap": true, 00:11:37.448 "flush": true, 00:11:37.448 "reset": true, 00:11:37.448 "nvme_admin": false, 00:11:37.448 "nvme_io": false, 00:11:37.448 "nvme_io_md": false, 00:11:37.448 "write_zeroes": true, 00:11:37.448 "zcopy": true, 00:11:37.448 "get_zone_info": false, 00:11:37.448 "zone_management": false, 00:11:37.448 "zone_append": false, 00:11:37.448 "compare": false, 00:11:37.448 "compare_and_write": false, 00:11:37.448 "abort": true, 00:11:37.448 "seek_hole": false, 00:11:37.448 "seek_data": false, 00:11:37.448 "copy": true, 00:11:37.448 "nvme_iov_md": false 00:11:37.448 }, 00:11:37.448 "memory_domains": [ 00:11:37.448 { 00:11:37.448 "dma_device_id": "system", 00:11:37.448 "dma_device_type": 1 00:11:37.448 }, 00:11:37.448 { 00:11:37.448 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:37.448 "dma_device_type": 2 00:11:37.448 } 00:11:37.448 ], 00:11:37.448 "driver_specific": {} 00:11:37.448 } 00:11:37.448 ] 00:11:37.448 09:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.448 09:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:37.448 09:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:37.448 09:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:11:37.448 09:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:37.448 09:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.448 09:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.448 BaseBdev4 00:11:37.448 09:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.448 09:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:37.448 09:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:11:37.448 09:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:37.448 09:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:37.448 09:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:37.448 09:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:37.448 09:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:37.448 09:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.448 09:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.448 09:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.448 09:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:37.448 09:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.448 09:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:11:37.448 [ 00:11:37.448 { 00:11:37.448 "name": "BaseBdev4", 00:11:37.448 "aliases": [ 00:11:37.448 "b501e54c-8323-4b1b-ad6e-6ea22c4ac31d" 00:11:37.448 ], 00:11:37.448 "product_name": "Malloc disk", 00:11:37.448 "block_size": 512, 00:11:37.448 "num_blocks": 65536, 00:11:37.448 "uuid": "b501e54c-8323-4b1b-ad6e-6ea22c4ac31d", 00:11:37.448 "assigned_rate_limits": { 00:11:37.448 "rw_ios_per_sec": 0, 00:11:37.448 "rw_mbytes_per_sec": 0, 00:11:37.448 "r_mbytes_per_sec": 0, 00:11:37.448 "w_mbytes_per_sec": 0 00:11:37.448 }, 00:11:37.448 "claimed": false, 00:11:37.448 "zoned": false, 00:11:37.448 "supported_io_types": { 00:11:37.448 "read": true, 00:11:37.448 "write": true, 00:11:37.448 "unmap": true, 00:11:37.448 "flush": true, 00:11:37.448 "reset": true, 00:11:37.448 "nvme_admin": false, 00:11:37.448 "nvme_io": false, 00:11:37.448 "nvme_io_md": false, 00:11:37.448 "write_zeroes": true, 00:11:37.448 "zcopy": true, 00:11:37.448 "get_zone_info": false, 00:11:37.448 "zone_management": false, 00:11:37.448 "zone_append": false, 00:11:37.448 "compare": false, 00:11:37.448 "compare_and_write": false, 00:11:37.448 "abort": true, 00:11:37.448 "seek_hole": false, 00:11:37.448 "seek_data": false, 00:11:37.448 "copy": true, 00:11:37.448 "nvme_iov_md": false 00:11:37.448 }, 00:11:37.448 "memory_domains": [ 00:11:37.448 { 00:11:37.448 "dma_device_id": "system", 00:11:37.448 "dma_device_type": 1 00:11:37.448 }, 00:11:37.448 { 00:11:37.448 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:37.448 "dma_device_type": 2 00:11:37.448 } 00:11:37.448 ], 00:11:37.448 "driver_specific": {} 00:11:37.448 } 00:11:37.448 ] 00:11:37.448 09:56:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.448 09:56:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:37.448 09:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:37.448 09:56:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:37.448 09:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:37.448 09:56:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.448 09:56:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.448 [2024-10-21 09:56:14.015891] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:37.448 [2024-10-21 09:56:14.015947] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:37.448 [2024-10-21 09:56:14.015975] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:37.448 [2024-10-21 09:56:14.018292] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:37.448 [2024-10-21 09:56:14.018366] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:37.448 09:56:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.448 09:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:37.448 09:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:37.448 09:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:37.448 09:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:37.448 09:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:37.448 09:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:37.448 09:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:37.448 09:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:37.448 09:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:37.448 09:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:37.448 09:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.448 09:56:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.448 09:56:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.448 09:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:37.707 09:56:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.707 09:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:37.707 "name": "Existed_Raid", 00:11:37.707 "uuid": "1d6e83bf-cf54-442e-a6ee-c3c4517c4856", 00:11:37.707 "strip_size_kb": 64, 00:11:37.708 "state": "configuring", 00:11:37.708 "raid_level": "concat", 00:11:37.708 "superblock": true, 00:11:37.708 "num_base_bdevs": 4, 00:11:37.708 "num_base_bdevs_discovered": 3, 00:11:37.708 "num_base_bdevs_operational": 4, 00:11:37.708 "base_bdevs_list": [ 00:11:37.708 { 00:11:37.708 "name": "BaseBdev1", 00:11:37.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.708 "is_configured": false, 00:11:37.708 "data_offset": 0, 00:11:37.708 "data_size": 0 00:11:37.708 }, 00:11:37.708 { 00:11:37.708 "name": "BaseBdev2", 00:11:37.708 "uuid": "08148ee7-0129-4741-bf43-bb1e6fb5f9a4", 00:11:37.708 "is_configured": true, 00:11:37.708 "data_offset": 2048, 00:11:37.708 "data_size": 63488 
00:11:37.708 }, 00:11:37.708 { 00:11:37.708 "name": "BaseBdev3", 00:11:37.708 "uuid": "e5e4a26f-ec02-493d-a84f-32146021b1ab", 00:11:37.708 "is_configured": true, 00:11:37.708 "data_offset": 2048, 00:11:37.708 "data_size": 63488 00:11:37.708 }, 00:11:37.708 { 00:11:37.708 "name": "BaseBdev4", 00:11:37.708 "uuid": "b501e54c-8323-4b1b-ad6e-6ea22c4ac31d", 00:11:37.708 "is_configured": true, 00:11:37.708 "data_offset": 2048, 00:11:37.708 "data_size": 63488 00:11:37.708 } 00:11:37.708 ] 00:11:37.708 }' 00:11:37.708 09:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:37.708 09:56:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.967 09:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:37.967 09:56:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.968 09:56:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.968 [2024-10-21 09:56:14.463195] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:37.968 09:56:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.968 09:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:37.968 09:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:37.968 09:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:37.968 09:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:37.968 09:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:37.968 09:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:37.968 09:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:37.968 09:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:37.968 09:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:37.968 09:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:37.968 09:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.968 09:56:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.968 09:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:37.968 09:56:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.968 09:56:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.968 09:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:37.968 "name": "Existed_Raid", 00:11:37.968 "uuid": "1d6e83bf-cf54-442e-a6ee-c3c4517c4856", 00:11:37.968 "strip_size_kb": 64, 00:11:37.968 "state": "configuring", 00:11:37.968 "raid_level": "concat", 00:11:37.968 "superblock": true, 00:11:37.968 "num_base_bdevs": 4, 00:11:37.968 "num_base_bdevs_discovered": 2, 00:11:37.968 "num_base_bdevs_operational": 4, 00:11:37.968 "base_bdevs_list": [ 00:11:37.968 { 00:11:37.968 "name": "BaseBdev1", 00:11:37.968 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.968 "is_configured": false, 00:11:37.968 "data_offset": 0, 00:11:37.968 "data_size": 0 00:11:37.968 }, 00:11:37.968 { 00:11:37.968 "name": null, 00:11:37.968 "uuid": "08148ee7-0129-4741-bf43-bb1e6fb5f9a4", 00:11:37.968 "is_configured": false, 00:11:37.968 "data_offset": 0, 00:11:37.968 "data_size": 63488 
00:11:37.968 }, 00:11:37.968 { 00:11:37.968 "name": "BaseBdev3", 00:11:37.968 "uuid": "e5e4a26f-ec02-493d-a84f-32146021b1ab", 00:11:37.968 "is_configured": true, 00:11:37.968 "data_offset": 2048, 00:11:37.968 "data_size": 63488 00:11:37.968 }, 00:11:37.968 { 00:11:37.968 "name": "BaseBdev4", 00:11:37.968 "uuid": "b501e54c-8323-4b1b-ad6e-6ea22c4ac31d", 00:11:37.968 "is_configured": true, 00:11:37.968 "data_offset": 2048, 00:11:37.968 "data_size": 63488 00:11:37.968 } 00:11:37.968 ] 00:11:37.968 }' 00:11:37.968 09:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:37.968 09:56:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.538 09:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:38.538 09:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.538 09:56:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.538 09:56:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.538 09:56:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.538 09:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:38.538 09:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:38.538 09:56:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.538 09:56:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.538 [2024-10-21 09:56:15.019547] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:38.538 BaseBdev1 00:11:38.538 09:56:15 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.538 09:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:38.538 09:56:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:38.538 09:56:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:38.538 09:56:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:38.538 09:56:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:38.538 09:56:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:38.538 09:56:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:38.538 09:56:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.538 09:56:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.538 09:56:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.538 09:56:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:38.538 09:56:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.538 09:56:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.538 [ 00:11:38.538 { 00:11:38.538 "name": "BaseBdev1", 00:11:38.538 "aliases": [ 00:11:38.538 "ccd8a97d-bcc3-4fe6-ab8d-eb8eeaf2791e" 00:11:38.538 ], 00:11:38.538 "product_name": "Malloc disk", 00:11:38.538 "block_size": 512, 00:11:38.538 "num_blocks": 65536, 00:11:38.538 "uuid": "ccd8a97d-bcc3-4fe6-ab8d-eb8eeaf2791e", 00:11:38.538 "assigned_rate_limits": { 00:11:38.538 "rw_ios_per_sec": 0, 00:11:38.538 "rw_mbytes_per_sec": 0, 
00:11:38.538 "r_mbytes_per_sec": 0, 00:11:38.538 "w_mbytes_per_sec": 0 00:11:38.538 }, 00:11:38.538 "claimed": true, 00:11:38.538 "claim_type": "exclusive_write", 00:11:38.538 "zoned": false, 00:11:38.538 "supported_io_types": { 00:11:38.538 "read": true, 00:11:38.538 "write": true, 00:11:38.538 "unmap": true, 00:11:38.538 "flush": true, 00:11:38.538 "reset": true, 00:11:38.538 "nvme_admin": false, 00:11:38.538 "nvme_io": false, 00:11:38.538 "nvme_io_md": false, 00:11:38.538 "write_zeroes": true, 00:11:38.538 "zcopy": true, 00:11:38.538 "get_zone_info": false, 00:11:38.538 "zone_management": false, 00:11:38.538 "zone_append": false, 00:11:38.538 "compare": false, 00:11:38.538 "compare_and_write": false, 00:11:38.538 "abort": true, 00:11:38.538 "seek_hole": false, 00:11:38.538 "seek_data": false, 00:11:38.538 "copy": true, 00:11:38.538 "nvme_iov_md": false 00:11:38.538 }, 00:11:38.538 "memory_domains": [ 00:11:38.538 { 00:11:38.538 "dma_device_id": "system", 00:11:38.538 "dma_device_type": 1 00:11:38.538 }, 00:11:38.538 { 00:11:38.538 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.538 "dma_device_type": 2 00:11:38.538 } 00:11:38.538 ], 00:11:38.538 "driver_specific": {} 00:11:38.538 } 00:11:38.538 ] 00:11:38.538 09:56:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.538 09:56:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:38.538 09:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:38.538 09:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:38.538 09:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:38.538 09:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:38.538 09:56:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:38.538 09:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:38.538 09:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:38.538 09:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:38.538 09:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:38.538 09:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:38.538 09:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.539 09:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:38.539 09:56:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.539 09:56:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.539 09:56:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.539 09:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:38.539 "name": "Existed_Raid", 00:11:38.539 "uuid": "1d6e83bf-cf54-442e-a6ee-c3c4517c4856", 00:11:38.539 "strip_size_kb": 64, 00:11:38.539 "state": "configuring", 00:11:38.539 "raid_level": "concat", 00:11:38.539 "superblock": true, 00:11:38.539 "num_base_bdevs": 4, 00:11:38.539 "num_base_bdevs_discovered": 3, 00:11:38.539 "num_base_bdevs_operational": 4, 00:11:38.539 "base_bdevs_list": [ 00:11:38.539 { 00:11:38.539 "name": "BaseBdev1", 00:11:38.539 "uuid": "ccd8a97d-bcc3-4fe6-ab8d-eb8eeaf2791e", 00:11:38.539 "is_configured": true, 00:11:38.539 "data_offset": 2048, 00:11:38.539 "data_size": 63488 00:11:38.539 }, 00:11:38.539 { 
00:11:38.539 "name": null, 00:11:38.539 "uuid": "08148ee7-0129-4741-bf43-bb1e6fb5f9a4", 00:11:38.539 "is_configured": false, 00:11:38.539 "data_offset": 0, 00:11:38.539 "data_size": 63488 00:11:38.539 }, 00:11:38.539 { 00:11:38.539 "name": "BaseBdev3", 00:11:38.539 "uuid": "e5e4a26f-ec02-493d-a84f-32146021b1ab", 00:11:38.539 "is_configured": true, 00:11:38.539 "data_offset": 2048, 00:11:38.539 "data_size": 63488 00:11:38.539 }, 00:11:38.539 { 00:11:38.539 "name": "BaseBdev4", 00:11:38.539 "uuid": "b501e54c-8323-4b1b-ad6e-6ea22c4ac31d", 00:11:38.539 "is_configured": true, 00:11:38.539 "data_offset": 2048, 00:11:38.539 "data_size": 63488 00:11:38.539 } 00:11:38.539 ] 00:11:38.539 }' 00:11:38.539 09:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:38.539 09:56:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.108 09:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.108 09:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:39.108 09:56:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.108 09:56:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.108 09:56:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.108 09:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:39.108 09:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:39.108 09:56:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.108 09:56:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.108 [2024-10-21 09:56:15.590782] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:39.108 09:56:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.109 09:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:39.109 09:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:39.109 09:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:39.109 09:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:39.109 09:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:39.109 09:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:39.109 09:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:39.109 09:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:39.109 09:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:39.109 09:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:39.109 09:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.109 09:56:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.109 09:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:39.109 09:56:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.109 09:56:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.109 09:56:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:39.109 "name": "Existed_Raid", 00:11:39.109 "uuid": "1d6e83bf-cf54-442e-a6ee-c3c4517c4856", 00:11:39.109 "strip_size_kb": 64, 00:11:39.109 "state": "configuring", 00:11:39.109 "raid_level": "concat", 00:11:39.109 "superblock": true, 00:11:39.109 "num_base_bdevs": 4, 00:11:39.109 "num_base_bdevs_discovered": 2, 00:11:39.109 "num_base_bdevs_operational": 4, 00:11:39.109 "base_bdevs_list": [ 00:11:39.109 { 00:11:39.109 "name": "BaseBdev1", 00:11:39.109 "uuid": "ccd8a97d-bcc3-4fe6-ab8d-eb8eeaf2791e", 00:11:39.109 "is_configured": true, 00:11:39.109 "data_offset": 2048, 00:11:39.109 "data_size": 63488 00:11:39.109 }, 00:11:39.109 { 00:11:39.109 "name": null, 00:11:39.109 "uuid": "08148ee7-0129-4741-bf43-bb1e6fb5f9a4", 00:11:39.109 "is_configured": false, 00:11:39.109 "data_offset": 0, 00:11:39.109 "data_size": 63488 00:11:39.109 }, 00:11:39.109 { 00:11:39.109 "name": null, 00:11:39.109 "uuid": "e5e4a26f-ec02-493d-a84f-32146021b1ab", 00:11:39.109 "is_configured": false, 00:11:39.109 "data_offset": 0, 00:11:39.109 "data_size": 63488 00:11:39.109 }, 00:11:39.109 { 00:11:39.109 "name": "BaseBdev4", 00:11:39.109 "uuid": "b501e54c-8323-4b1b-ad6e-6ea22c4ac31d", 00:11:39.109 "is_configured": true, 00:11:39.109 "data_offset": 2048, 00:11:39.109 "data_size": 63488 00:11:39.109 } 00:11:39.109 ] 00:11:39.109 }' 00:11:39.109 09:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:39.109 09:56:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.678 09:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.679 09:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:39.679 09:56:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.679 
09:56:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.679 09:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.679 09:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:39.679 09:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:39.679 09:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.679 09:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.679 [2024-10-21 09:56:16.042123] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:39.679 09:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.679 09:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:39.679 09:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:39.679 09:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:39.679 09:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:39.679 09:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:39.679 09:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:39.679 09:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:39.679 09:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:39.679 09:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:39.679 09:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:39.679 09:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:39.679 09:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.679 09:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.679 09:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.679 09:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.679 09:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:39.679 "name": "Existed_Raid", 00:11:39.679 "uuid": "1d6e83bf-cf54-442e-a6ee-c3c4517c4856", 00:11:39.679 "strip_size_kb": 64, 00:11:39.679 "state": "configuring", 00:11:39.679 "raid_level": "concat", 00:11:39.679 "superblock": true, 00:11:39.679 "num_base_bdevs": 4, 00:11:39.679 "num_base_bdevs_discovered": 3, 00:11:39.679 "num_base_bdevs_operational": 4, 00:11:39.679 "base_bdevs_list": [ 00:11:39.679 { 00:11:39.679 "name": "BaseBdev1", 00:11:39.679 "uuid": "ccd8a97d-bcc3-4fe6-ab8d-eb8eeaf2791e", 00:11:39.679 "is_configured": true, 00:11:39.679 "data_offset": 2048, 00:11:39.679 "data_size": 63488 00:11:39.679 }, 00:11:39.679 { 00:11:39.679 "name": null, 00:11:39.679 "uuid": "08148ee7-0129-4741-bf43-bb1e6fb5f9a4", 00:11:39.679 "is_configured": false, 00:11:39.679 "data_offset": 0, 00:11:39.679 "data_size": 63488 00:11:39.679 }, 00:11:39.679 { 00:11:39.679 "name": "BaseBdev3", 00:11:39.679 "uuid": "e5e4a26f-ec02-493d-a84f-32146021b1ab", 00:11:39.679 "is_configured": true, 00:11:39.679 "data_offset": 2048, 00:11:39.679 "data_size": 63488 00:11:39.679 }, 00:11:39.679 { 00:11:39.679 "name": "BaseBdev4", 00:11:39.679 "uuid": 
"b501e54c-8323-4b1b-ad6e-6ea22c4ac31d", 00:11:39.679 "is_configured": true, 00:11:39.679 "data_offset": 2048, 00:11:39.679 "data_size": 63488 00:11:39.679 } 00:11:39.679 ] 00:11:39.679 }' 00:11:39.679 09:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:39.679 09:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.939 09:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.939 09:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:39.939 09:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.939 09:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.198 09:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.199 09:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:40.199 09:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:40.199 09:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.199 09:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.199 [2024-10-21 09:56:16.569281] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:40.199 09:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.199 09:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:40.199 09:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:40.199 09:56:16 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:40.199 09:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:40.199 09:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:40.199 09:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:40.199 09:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:40.199 09:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:40.199 09:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:40.199 09:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:40.199 09:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.199 09:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.199 09:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.199 09:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:40.199 09:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.199 09:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:40.199 "name": "Existed_Raid", 00:11:40.199 "uuid": "1d6e83bf-cf54-442e-a6ee-c3c4517c4856", 00:11:40.199 "strip_size_kb": 64, 00:11:40.199 "state": "configuring", 00:11:40.199 "raid_level": "concat", 00:11:40.199 "superblock": true, 00:11:40.199 "num_base_bdevs": 4, 00:11:40.199 "num_base_bdevs_discovered": 2, 00:11:40.199 "num_base_bdevs_operational": 4, 00:11:40.199 "base_bdevs_list": [ 00:11:40.199 { 00:11:40.199 "name": null, 00:11:40.199 
"uuid": "ccd8a97d-bcc3-4fe6-ab8d-eb8eeaf2791e", 00:11:40.199 "is_configured": false, 00:11:40.199 "data_offset": 0, 00:11:40.199 "data_size": 63488 00:11:40.199 }, 00:11:40.199 { 00:11:40.199 "name": null, 00:11:40.199 "uuid": "08148ee7-0129-4741-bf43-bb1e6fb5f9a4", 00:11:40.199 "is_configured": false, 00:11:40.199 "data_offset": 0, 00:11:40.199 "data_size": 63488 00:11:40.199 }, 00:11:40.199 { 00:11:40.199 "name": "BaseBdev3", 00:11:40.199 "uuid": "e5e4a26f-ec02-493d-a84f-32146021b1ab", 00:11:40.199 "is_configured": true, 00:11:40.199 "data_offset": 2048, 00:11:40.199 "data_size": 63488 00:11:40.199 }, 00:11:40.199 { 00:11:40.199 "name": "BaseBdev4", 00:11:40.199 "uuid": "b501e54c-8323-4b1b-ad6e-6ea22c4ac31d", 00:11:40.199 "is_configured": true, 00:11:40.199 "data_offset": 2048, 00:11:40.199 "data_size": 63488 00:11:40.199 } 00:11:40.199 ] 00:11:40.199 }' 00:11:40.199 09:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:40.199 09:56:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.768 09:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.768 09:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:40.768 09:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.768 09:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.768 09:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.768 09:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:40.768 09:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:40.768 09:56:17 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.768 09:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.769 [2024-10-21 09:56:17.214374] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:40.769 09:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.769 09:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:40.769 09:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:40.769 09:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:40.769 09:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:40.769 09:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:40.769 09:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:40.769 09:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:40.769 09:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:40.769 09:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:40.769 09:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:40.769 09:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.769 09:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.769 09:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:40.769 09:56:17 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.769 09:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.769 09:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:40.769 "name": "Existed_Raid", 00:11:40.769 "uuid": "1d6e83bf-cf54-442e-a6ee-c3c4517c4856", 00:11:40.769 "strip_size_kb": 64, 00:11:40.769 "state": "configuring", 00:11:40.769 "raid_level": "concat", 00:11:40.769 "superblock": true, 00:11:40.769 "num_base_bdevs": 4, 00:11:40.769 "num_base_bdevs_discovered": 3, 00:11:40.769 "num_base_bdevs_operational": 4, 00:11:40.769 "base_bdevs_list": [ 00:11:40.769 { 00:11:40.769 "name": null, 00:11:40.769 "uuid": "ccd8a97d-bcc3-4fe6-ab8d-eb8eeaf2791e", 00:11:40.769 "is_configured": false, 00:11:40.769 "data_offset": 0, 00:11:40.769 "data_size": 63488 00:11:40.769 }, 00:11:40.769 { 00:11:40.769 "name": "BaseBdev2", 00:11:40.769 "uuid": "08148ee7-0129-4741-bf43-bb1e6fb5f9a4", 00:11:40.769 "is_configured": true, 00:11:40.769 "data_offset": 2048, 00:11:40.769 "data_size": 63488 00:11:40.769 }, 00:11:40.769 { 00:11:40.769 "name": "BaseBdev3", 00:11:40.769 "uuid": "e5e4a26f-ec02-493d-a84f-32146021b1ab", 00:11:40.769 "is_configured": true, 00:11:40.769 "data_offset": 2048, 00:11:40.769 "data_size": 63488 00:11:40.769 }, 00:11:40.769 { 00:11:40.769 "name": "BaseBdev4", 00:11:40.769 "uuid": "b501e54c-8323-4b1b-ad6e-6ea22c4ac31d", 00:11:40.769 "is_configured": true, 00:11:40.769 "data_offset": 2048, 00:11:40.769 "data_size": 63488 00:11:40.769 } 00:11:40.769 ] 00:11:40.769 }' 00:11:40.769 09:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:40.769 09:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.337 09:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:41.338 09:56:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.338 09:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.338 09:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.338 09:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.338 09:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:41.338 09:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:41.338 09:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.338 09:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.338 09:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.338 09:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.338 09:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u ccd8a97d-bcc3-4fe6-ab8d-eb8eeaf2791e 00:11:41.338 09:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.338 09:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.338 [2024-10-21 09:56:17.841216] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:41.338 [2024-10-21 09:56:17.841552] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:11:41.338 [2024-10-21 09:56:17.841593] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:41.338 [2024-10-21 09:56:17.841954] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006150 00:11:41.338 [2024-10-21 09:56:17.842160] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:11:41.338 [2024-10-21 09:56:17.842188] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006600 00:11:41.338 NewBaseBdev 00:11:41.338 [2024-10-21 09:56:17.842365] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:41.338 09:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.338 09:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:41.338 09:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:11:41.338 09:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:41.338 09:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:41.338 09:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:41.338 09:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:41.338 09:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:41.338 09:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.338 09:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.338 09:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.338 09:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:41.338 09:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.338 09:56:17 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.338 [ 00:11:41.338 { 00:11:41.338 "name": "NewBaseBdev", 00:11:41.338 "aliases": [ 00:11:41.338 "ccd8a97d-bcc3-4fe6-ab8d-eb8eeaf2791e" 00:11:41.338 ], 00:11:41.338 "product_name": "Malloc disk", 00:11:41.338 "block_size": 512, 00:11:41.338 "num_blocks": 65536, 00:11:41.338 "uuid": "ccd8a97d-bcc3-4fe6-ab8d-eb8eeaf2791e", 00:11:41.338 "assigned_rate_limits": { 00:11:41.338 "rw_ios_per_sec": 0, 00:11:41.338 "rw_mbytes_per_sec": 0, 00:11:41.338 "r_mbytes_per_sec": 0, 00:11:41.338 "w_mbytes_per_sec": 0 00:11:41.338 }, 00:11:41.338 "claimed": true, 00:11:41.338 "claim_type": "exclusive_write", 00:11:41.338 "zoned": false, 00:11:41.338 "supported_io_types": { 00:11:41.338 "read": true, 00:11:41.338 "write": true, 00:11:41.338 "unmap": true, 00:11:41.338 "flush": true, 00:11:41.338 "reset": true, 00:11:41.338 "nvme_admin": false, 00:11:41.338 "nvme_io": false, 00:11:41.338 "nvme_io_md": false, 00:11:41.338 "write_zeroes": true, 00:11:41.338 "zcopy": true, 00:11:41.338 "get_zone_info": false, 00:11:41.338 "zone_management": false, 00:11:41.338 "zone_append": false, 00:11:41.338 "compare": false, 00:11:41.338 "compare_and_write": false, 00:11:41.338 "abort": true, 00:11:41.338 "seek_hole": false, 00:11:41.338 "seek_data": false, 00:11:41.338 "copy": true, 00:11:41.338 "nvme_iov_md": false 00:11:41.338 }, 00:11:41.338 "memory_domains": [ 00:11:41.338 { 00:11:41.338 "dma_device_id": "system", 00:11:41.338 "dma_device_type": 1 00:11:41.338 }, 00:11:41.338 { 00:11:41.338 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.338 "dma_device_type": 2 00:11:41.338 } 00:11:41.338 ], 00:11:41.338 "driver_specific": {} 00:11:41.338 } 00:11:41.338 ] 00:11:41.338 09:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.338 09:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:41.338 09:56:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:41.338 09:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:41.338 09:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:41.338 09:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:41.338 09:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:41.338 09:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:41.338 09:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:41.338 09:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:41.338 09:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:41.338 09:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:41.338 09:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.338 09:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:41.338 09:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.338 09:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.338 09:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.597 09:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:41.597 "name": "Existed_Raid", 00:11:41.597 "uuid": "1d6e83bf-cf54-442e-a6ee-c3c4517c4856", 00:11:41.597 "strip_size_kb": 64, 00:11:41.597 
"state": "online", 00:11:41.597 "raid_level": "concat", 00:11:41.597 "superblock": true, 00:11:41.597 "num_base_bdevs": 4, 00:11:41.597 "num_base_bdevs_discovered": 4, 00:11:41.597 "num_base_bdevs_operational": 4, 00:11:41.597 "base_bdevs_list": [ 00:11:41.597 { 00:11:41.597 "name": "NewBaseBdev", 00:11:41.597 "uuid": "ccd8a97d-bcc3-4fe6-ab8d-eb8eeaf2791e", 00:11:41.597 "is_configured": true, 00:11:41.597 "data_offset": 2048, 00:11:41.597 "data_size": 63488 00:11:41.597 }, 00:11:41.597 { 00:11:41.597 "name": "BaseBdev2", 00:11:41.597 "uuid": "08148ee7-0129-4741-bf43-bb1e6fb5f9a4", 00:11:41.597 "is_configured": true, 00:11:41.597 "data_offset": 2048, 00:11:41.597 "data_size": 63488 00:11:41.597 }, 00:11:41.597 { 00:11:41.597 "name": "BaseBdev3", 00:11:41.597 "uuid": "e5e4a26f-ec02-493d-a84f-32146021b1ab", 00:11:41.597 "is_configured": true, 00:11:41.597 "data_offset": 2048, 00:11:41.597 "data_size": 63488 00:11:41.597 }, 00:11:41.597 { 00:11:41.597 "name": "BaseBdev4", 00:11:41.597 "uuid": "b501e54c-8323-4b1b-ad6e-6ea22c4ac31d", 00:11:41.597 "is_configured": true, 00:11:41.597 "data_offset": 2048, 00:11:41.597 "data_size": 63488 00:11:41.597 } 00:11:41.597 ] 00:11:41.597 }' 00:11:41.597 09:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:41.597 09:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.857 09:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:41.857 09:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:41.857 09:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:41.857 09:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:41.857 09:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:41.857 
09:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:41.857 09:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:41.857 09:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:41.857 09:56:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.857 09:56:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.857 [2024-10-21 09:56:18.320972] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:41.857 09:56:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.857 09:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:41.857 "name": "Existed_Raid", 00:11:41.857 "aliases": [ 00:11:41.857 "1d6e83bf-cf54-442e-a6ee-c3c4517c4856" 00:11:41.857 ], 00:11:41.857 "product_name": "Raid Volume", 00:11:41.857 "block_size": 512, 00:11:41.857 "num_blocks": 253952, 00:11:41.857 "uuid": "1d6e83bf-cf54-442e-a6ee-c3c4517c4856", 00:11:41.857 "assigned_rate_limits": { 00:11:41.857 "rw_ios_per_sec": 0, 00:11:41.857 "rw_mbytes_per_sec": 0, 00:11:41.857 "r_mbytes_per_sec": 0, 00:11:41.857 "w_mbytes_per_sec": 0 00:11:41.857 }, 00:11:41.857 "claimed": false, 00:11:41.857 "zoned": false, 00:11:41.857 "supported_io_types": { 00:11:41.857 "read": true, 00:11:41.857 "write": true, 00:11:41.857 "unmap": true, 00:11:41.857 "flush": true, 00:11:41.857 "reset": true, 00:11:41.857 "nvme_admin": false, 00:11:41.857 "nvme_io": false, 00:11:41.857 "nvme_io_md": false, 00:11:41.857 "write_zeroes": true, 00:11:41.857 "zcopy": false, 00:11:41.857 "get_zone_info": false, 00:11:41.857 "zone_management": false, 00:11:41.857 "zone_append": false, 00:11:41.857 "compare": false, 00:11:41.857 "compare_and_write": false, 00:11:41.857 "abort": 
false, 00:11:41.857 "seek_hole": false, 00:11:41.857 "seek_data": false, 00:11:41.857 "copy": false, 00:11:41.857 "nvme_iov_md": false 00:11:41.857 }, 00:11:41.857 "memory_domains": [ 00:11:41.857 { 00:11:41.857 "dma_device_id": "system", 00:11:41.857 "dma_device_type": 1 00:11:41.857 }, 00:11:41.857 { 00:11:41.857 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.857 "dma_device_type": 2 00:11:41.857 }, 00:11:41.857 { 00:11:41.857 "dma_device_id": "system", 00:11:41.857 "dma_device_type": 1 00:11:41.857 }, 00:11:41.857 { 00:11:41.857 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.857 "dma_device_type": 2 00:11:41.857 }, 00:11:41.857 { 00:11:41.857 "dma_device_id": "system", 00:11:41.857 "dma_device_type": 1 00:11:41.857 }, 00:11:41.857 { 00:11:41.857 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.857 "dma_device_type": 2 00:11:41.857 }, 00:11:41.857 { 00:11:41.857 "dma_device_id": "system", 00:11:41.857 "dma_device_type": 1 00:11:41.857 }, 00:11:41.857 { 00:11:41.857 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.857 "dma_device_type": 2 00:11:41.857 } 00:11:41.857 ], 00:11:41.857 "driver_specific": { 00:11:41.857 "raid": { 00:11:41.857 "uuid": "1d6e83bf-cf54-442e-a6ee-c3c4517c4856", 00:11:41.857 "strip_size_kb": 64, 00:11:41.857 "state": "online", 00:11:41.857 "raid_level": "concat", 00:11:41.857 "superblock": true, 00:11:41.857 "num_base_bdevs": 4, 00:11:41.857 "num_base_bdevs_discovered": 4, 00:11:41.857 "num_base_bdevs_operational": 4, 00:11:41.857 "base_bdevs_list": [ 00:11:41.857 { 00:11:41.857 "name": "NewBaseBdev", 00:11:41.857 "uuid": "ccd8a97d-bcc3-4fe6-ab8d-eb8eeaf2791e", 00:11:41.857 "is_configured": true, 00:11:41.857 "data_offset": 2048, 00:11:41.857 "data_size": 63488 00:11:41.857 }, 00:11:41.857 { 00:11:41.857 "name": "BaseBdev2", 00:11:41.857 "uuid": "08148ee7-0129-4741-bf43-bb1e6fb5f9a4", 00:11:41.857 "is_configured": true, 00:11:41.857 "data_offset": 2048, 00:11:41.857 "data_size": 63488 00:11:41.857 }, 00:11:41.857 { 00:11:41.857 
"name": "BaseBdev3", 00:11:41.857 "uuid": "e5e4a26f-ec02-493d-a84f-32146021b1ab", 00:11:41.857 "is_configured": true, 00:11:41.857 "data_offset": 2048, 00:11:41.857 "data_size": 63488 00:11:41.857 }, 00:11:41.857 { 00:11:41.857 "name": "BaseBdev4", 00:11:41.857 "uuid": "b501e54c-8323-4b1b-ad6e-6ea22c4ac31d", 00:11:41.857 "is_configured": true, 00:11:41.857 "data_offset": 2048, 00:11:41.857 "data_size": 63488 00:11:41.857 } 00:11:41.857 ] 00:11:41.857 } 00:11:41.857 } 00:11:41.857 }' 00:11:41.857 09:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:41.857 09:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:41.857 BaseBdev2 00:11:41.857 BaseBdev3 00:11:41.857 BaseBdev4' 00:11:41.857 09:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:42.118 09:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:42.118 09:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:42.118 09:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:42.118 09:56:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.118 09:56:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.118 09:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:42.118 09:56:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.118 09:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:42.118 09:56:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:42.118 09:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:42.118 09:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:42.118 09:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:42.118 09:56:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.118 09:56:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.118 09:56:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.118 09:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:42.118 09:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:42.118 09:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:42.118 09:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:42.118 09:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:42.118 09:56:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.118 09:56:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.118 09:56:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.118 09:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:42.118 09:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:11:42.118 09:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:42.118 09:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:42.118 09:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:42.118 09:56:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.118 09:56:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.118 09:56:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.118 09:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:42.118 09:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:42.118 09:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:42.118 09:56:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.118 09:56:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.118 [2024-10-21 09:56:18.636022] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:42.118 [2024-10-21 09:56:18.636065] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:42.118 [2024-10-21 09:56:18.636181] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:42.118 [2024-10-21 09:56:18.636268] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:42.118 [2024-10-21 09:56:18.636281] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, 
state offline 00:11:42.118 09:56:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.118 09:56:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 71538 00:11:42.118 09:56:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 71538 ']' 00:11:42.118 09:56:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 71538 00:11:42.118 09:56:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:11:42.118 09:56:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:42.118 09:56:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71538 00:11:42.118 09:56:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:42.118 09:56:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:42.118 09:56:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71538' 00:11:42.118 killing process with pid 71538 00:11:42.118 09:56:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 71538 00:11:42.118 [2024-10-21 09:56:18.684003] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:42.118 09:56:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 71538 00:11:42.691 [2024-10-21 09:56:19.115583] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:44.076 09:56:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:44.076 00:11:44.076 real 0m11.954s 00:11:44.076 user 0m18.539s 00:11:44.076 sys 0m2.400s 00:11:44.076 09:56:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:44.076 09:56:20 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.076 ************************************ 00:11:44.076 END TEST raid_state_function_test_sb 00:11:44.076 ************************************ 00:11:44.076 09:56:20 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:11:44.076 09:56:20 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:44.076 09:56:20 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:44.076 09:56:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:44.076 ************************************ 00:11:44.076 START TEST raid_superblock_test 00:11:44.076 ************************************ 00:11:44.076 09:56:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 4 00:11:44.076 09:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:11:44.076 09:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:11:44.076 09:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:44.076 09:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:44.076 09:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:44.076 09:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:44.076 09:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:44.076 09:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:44.076 09:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:44.076 09:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:44.076 09:56:20 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:44.076 09:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:44.076 09:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:44.076 09:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:11:44.076 09:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:11:44.076 09:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:11:44.076 09:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72208 00:11:44.076 09:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72208 00:11:44.076 09:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:44.076 09:56:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 72208 ']' 00:11:44.076 09:56:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:44.076 09:56:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:44.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:44.076 09:56:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:44.076 09:56:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:44.076 09:56:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.076 [2024-10-21 09:56:20.538150] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
00:11:44.076 [2024-10-21 09:56:20.538299] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72208 ] 00:11:44.336 [2024-10-21 09:56:20.706329] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:44.336 [2024-10-21 09:56:20.858779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:44.597 [2024-10-21 09:56:21.126496] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:44.597 [2024-10-21 09:56:21.126551] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:44.859 09:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:44.859 09:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:11:44.859 09:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:44.859 09:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:44.859 09:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:44.859 09:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:44.859 09:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:44.859 09:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:44.859 09:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:44.859 09:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:44.859 09:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:44.859 
09:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.859 09:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.859 malloc1 00:11:44.859 09:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.859 09:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:44.859 09:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.859 09:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.859 [2024-10-21 09:56:21.437364] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:44.859 [2024-10-21 09:56:21.437450] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:44.859 [2024-10-21 09:56:21.437480] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:11:44.859 [2024-10-21 09:56:21.437492] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:44.859 [2024-10-21 09:56:21.439993] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:44.859 [2024-10-21 09:56:21.440038] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:44.859 pt1 00:11:44.859 09:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.859 09:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:44.859 09:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:44.859 09:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:44.859 09:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:44.859 09:56:21 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:44.859 09:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:44.859 09:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:44.859 09:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:44.859 09:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:44.859 09:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.859 09:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.119 malloc2 00:11:45.119 09:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.119 09:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:45.119 09:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.119 09:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.119 [2024-10-21 09:56:21.502302] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:45.119 [2024-10-21 09:56:21.502388] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:45.119 [2024-10-21 09:56:21.502419] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:11:45.119 [2024-10-21 09:56:21.502433] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:45.119 [2024-10-21 09:56:21.505262] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:45.119 [2024-10-21 09:56:21.505310] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:45.119 
pt2 00:11:45.119 09:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.119 09:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:45.119 09:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:45.119 09:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:45.119 09:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:45.119 09:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:45.119 09:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:45.119 09:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:45.119 09:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:45.119 09:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:45.119 09:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.119 09:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.119 malloc3 00:11:45.119 09:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.119 09:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:45.119 09:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.119 09:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.119 [2024-10-21 09:56:21.578834] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:45.119 [2024-10-21 09:56:21.578907] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:45.119 [2024-10-21 09:56:21.578935] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:11:45.119 [2024-10-21 09:56:21.578948] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:45.119 [2024-10-21 09:56:21.581420] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:45.120 [2024-10-21 09:56:21.581466] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:45.120 pt3 00:11:45.120 09:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.120 09:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:45.120 09:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:45.120 09:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:11:45.120 09:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:11:45.120 09:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:11:45.120 09:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:45.120 09:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:45.120 09:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:45.120 09:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:11:45.120 09:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.120 09:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.120 malloc4 00:11:45.120 09:56:21 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.120 09:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:45.120 09:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.120 09:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.120 [2024-10-21 09:56:21.652089] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:45.120 [2024-10-21 09:56:21.652163] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:45.120 [2024-10-21 09:56:21.652188] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:45.120 [2024-10-21 09:56:21.652200] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:45.120 [2024-10-21 09:56:21.654946] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:45.120 [2024-10-21 09:56:21.654993] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:45.120 pt4 00:11:45.120 09:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.120 09:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:45.120 09:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:45.120 09:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:11:45.120 09:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.120 09:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.120 [2024-10-21 09:56:21.664170] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:45.120 [2024-10-21 
09:56:21.666543] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:45.120 [2024-10-21 09:56:21.666668] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:45.120 [2024-10-21 09:56:21.666753] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:45.120 [2024-10-21 09:56:21.666996] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000005b80 00:11:45.120 [2024-10-21 09:56:21.667021] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:45.120 [2024-10-21 09:56:21.667365] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:11:45.120 [2024-10-21 09:56:21.667629] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000005b80 00:11:45.120 [2024-10-21 09:56:21.667655] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000005b80 00:11:45.120 [2024-10-21 09:56:21.667881] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:45.120 09:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.120 09:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:45.120 09:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:45.120 09:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:45.120 09:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:45.120 09:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:45.120 09:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:45.120 09:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:11:45.120 09:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:45.120 09:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:45.120 09:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:45.120 09:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.120 09:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:45.120 09:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.120 09:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.120 09:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.381 09:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:45.381 "name": "raid_bdev1", 00:11:45.381 "uuid": "9a54f84c-a924-4bbb-9e50-df6a4f08fcc5", 00:11:45.381 "strip_size_kb": 64, 00:11:45.381 "state": "online", 00:11:45.381 "raid_level": "concat", 00:11:45.381 "superblock": true, 00:11:45.381 "num_base_bdevs": 4, 00:11:45.381 "num_base_bdevs_discovered": 4, 00:11:45.381 "num_base_bdevs_operational": 4, 00:11:45.381 "base_bdevs_list": [ 00:11:45.381 { 00:11:45.381 "name": "pt1", 00:11:45.381 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:45.381 "is_configured": true, 00:11:45.381 "data_offset": 2048, 00:11:45.381 "data_size": 63488 00:11:45.381 }, 00:11:45.381 { 00:11:45.381 "name": "pt2", 00:11:45.381 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:45.381 "is_configured": true, 00:11:45.381 "data_offset": 2048, 00:11:45.381 "data_size": 63488 00:11:45.381 }, 00:11:45.381 { 00:11:45.381 "name": "pt3", 00:11:45.381 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:45.381 "is_configured": true, 00:11:45.381 "data_offset": 2048, 00:11:45.381 
"data_size": 63488 00:11:45.381 }, 00:11:45.381 { 00:11:45.381 "name": "pt4", 00:11:45.381 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:45.381 "is_configured": true, 00:11:45.381 "data_offset": 2048, 00:11:45.381 "data_size": 63488 00:11:45.381 } 00:11:45.381 ] 00:11:45.381 }' 00:11:45.381 09:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:45.381 09:56:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.641 09:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:45.641 09:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:45.641 09:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:45.641 09:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:45.641 09:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:45.641 09:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:45.641 09:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:45.641 09:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.641 09:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.641 09:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:45.641 [2024-10-21 09:56:22.131781] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:45.641 09:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.641 09:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:45.641 "name": "raid_bdev1", 00:11:45.641 "aliases": [ 00:11:45.641 "9a54f84c-a924-4bbb-9e50-df6a4f08fcc5" 
00:11:45.641 ], 00:11:45.641 "product_name": "Raid Volume", 00:11:45.641 "block_size": 512, 00:11:45.641 "num_blocks": 253952, 00:11:45.641 "uuid": "9a54f84c-a924-4bbb-9e50-df6a4f08fcc5", 00:11:45.641 "assigned_rate_limits": { 00:11:45.641 "rw_ios_per_sec": 0, 00:11:45.641 "rw_mbytes_per_sec": 0, 00:11:45.641 "r_mbytes_per_sec": 0, 00:11:45.641 "w_mbytes_per_sec": 0 00:11:45.641 }, 00:11:45.641 "claimed": false, 00:11:45.641 "zoned": false, 00:11:45.641 "supported_io_types": { 00:11:45.641 "read": true, 00:11:45.641 "write": true, 00:11:45.641 "unmap": true, 00:11:45.641 "flush": true, 00:11:45.641 "reset": true, 00:11:45.641 "nvme_admin": false, 00:11:45.641 "nvme_io": false, 00:11:45.641 "nvme_io_md": false, 00:11:45.641 "write_zeroes": true, 00:11:45.641 "zcopy": false, 00:11:45.641 "get_zone_info": false, 00:11:45.641 "zone_management": false, 00:11:45.641 "zone_append": false, 00:11:45.641 "compare": false, 00:11:45.641 "compare_and_write": false, 00:11:45.641 "abort": false, 00:11:45.641 "seek_hole": false, 00:11:45.641 "seek_data": false, 00:11:45.641 "copy": false, 00:11:45.641 "nvme_iov_md": false 00:11:45.641 }, 00:11:45.641 "memory_domains": [ 00:11:45.641 { 00:11:45.641 "dma_device_id": "system", 00:11:45.641 "dma_device_type": 1 00:11:45.641 }, 00:11:45.641 { 00:11:45.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:45.641 "dma_device_type": 2 00:11:45.641 }, 00:11:45.641 { 00:11:45.641 "dma_device_id": "system", 00:11:45.641 "dma_device_type": 1 00:11:45.641 }, 00:11:45.641 { 00:11:45.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:45.641 "dma_device_type": 2 00:11:45.641 }, 00:11:45.641 { 00:11:45.641 "dma_device_id": "system", 00:11:45.641 "dma_device_type": 1 00:11:45.641 }, 00:11:45.641 { 00:11:45.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:45.641 "dma_device_type": 2 00:11:45.641 }, 00:11:45.641 { 00:11:45.641 "dma_device_id": "system", 00:11:45.641 "dma_device_type": 1 00:11:45.641 }, 00:11:45.641 { 00:11:45.641 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:45.641 "dma_device_type": 2 00:11:45.641 } 00:11:45.641 ], 00:11:45.641 "driver_specific": { 00:11:45.641 "raid": { 00:11:45.641 "uuid": "9a54f84c-a924-4bbb-9e50-df6a4f08fcc5", 00:11:45.641 "strip_size_kb": 64, 00:11:45.641 "state": "online", 00:11:45.641 "raid_level": "concat", 00:11:45.641 "superblock": true, 00:11:45.641 "num_base_bdevs": 4, 00:11:45.641 "num_base_bdevs_discovered": 4, 00:11:45.641 "num_base_bdevs_operational": 4, 00:11:45.641 "base_bdevs_list": [ 00:11:45.641 { 00:11:45.641 "name": "pt1", 00:11:45.641 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:45.641 "is_configured": true, 00:11:45.641 "data_offset": 2048, 00:11:45.641 "data_size": 63488 00:11:45.641 }, 00:11:45.641 { 00:11:45.641 "name": "pt2", 00:11:45.641 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:45.641 "is_configured": true, 00:11:45.641 "data_offset": 2048, 00:11:45.641 "data_size": 63488 00:11:45.641 }, 00:11:45.641 { 00:11:45.641 "name": "pt3", 00:11:45.641 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:45.641 "is_configured": true, 00:11:45.641 "data_offset": 2048, 00:11:45.641 "data_size": 63488 00:11:45.641 }, 00:11:45.641 { 00:11:45.641 "name": "pt4", 00:11:45.641 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:45.641 "is_configured": true, 00:11:45.641 "data_offset": 2048, 00:11:45.641 "data_size": 63488 00:11:45.641 } 00:11:45.641 ] 00:11:45.641 } 00:11:45.641 } 00:11:45.641 }' 00:11:45.641 09:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:45.641 09:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:45.641 pt2 00:11:45.641 pt3 00:11:45.641 pt4' 00:11:45.641 09:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:45.641 09:56:22 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:45.902 09:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:45.902 09:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:45.902 09:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.902 09:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.902 09:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:45.902 09:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.902 09:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:45.902 09:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:45.902 09:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:45.902 09:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:45.902 09:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.902 09:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.902 09:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:45.902 09:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.902 09:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:45.902 09:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:45.902 09:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:45.902 09:56:22 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:45.902 09:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:45.902 09:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.902 09:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.902 09:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.902 09:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:45.902 09:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:45.902 09:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:45.902 09:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:45.902 09:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:45.902 09:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.902 09:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.902 09:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.902 09:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:45.902 09:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:45.902 09:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:45.902 09:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:45.902 09:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:45.902 09:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.902 [2024-10-21 09:56:22.431276] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:45.902 09:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.902 09:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=9a54f84c-a924-4bbb-9e50-df6a4f08fcc5 00:11:45.902 09:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 9a54f84c-a924-4bbb-9e50-df6a4f08fcc5 ']' 00:11:45.902 09:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:45.902 09:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.902 09:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.902 [2024-10-21 09:56:22.474771] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:45.902 [2024-10-21 09:56:22.474813] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:45.902 [2024-10-21 09:56:22.474929] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:45.902 [2024-10-21 09:56:22.475016] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:45.902 [2024-10-21 09:56:22.475050] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000005b80 name raid_bdev1, state offline 00:11:45.902 09:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.902 09:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.902 09:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:45.902 09:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:11:45.902 09:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.902 09:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.163 09:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:46.163 09:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:46.163 09:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:46.163 09:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:46.163 09:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.163 09:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.163 09:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.163 09:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:46.163 09:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:46.163 09:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.163 09:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.163 09:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.163 09:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:46.163 09:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:46.163 09:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.163 09:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.163 09:56:22 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.163 09:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:46.163 09:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:11:46.163 09:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.163 09:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.163 09:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.163 09:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:46.163 09:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.163 09:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.163 09:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:46.163 09:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.163 09:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:46.163 09:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:46.163 09:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:11:46.163 09:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:46.163 09:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:11:46.163 09:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:46.163 09:56:22 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:11:46.163 09:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:46.163 09:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:46.163 09:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.163 09:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.163 [2024-10-21 09:56:22.634770] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:46.163 [2024-10-21 09:56:22.636962] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:46.163 [2024-10-21 09:56:22.637025] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:46.163 [2024-10-21 09:56:22.637067] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:11:46.163 [2024-10-21 09:56:22.637127] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:46.163 [2024-10-21 09:56:22.637183] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:46.163 [2024-10-21 09:56:22.637205] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:46.163 [2024-10-21 09:56:22.637243] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:11:46.163 [2024-10-21 09:56:22.637260] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:46.163 [2024-10-21 09:56:22.637274] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000005f00 name raid_bdev1, state configuring 00:11:46.163 request: 00:11:46.163 { 00:11:46.163 "name": "raid_bdev1", 00:11:46.163 "raid_level": "concat", 00:11:46.163 "base_bdevs": [ 00:11:46.163 "malloc1", 00:11:46.163 "malloc2", 00:11:46.163 "malloc3", 00:11:46.163 "malloc4" 00:11:46.163 ], 00:11:46.163 "strip_size_kb": 64, 00:11:46.163 "superblock": false, 00:11:46.163 "method": "bdev_raid_create", 00:11:46.163 "req_id": 1 00:11:46.163 } 00:11:46.163 Got JSON-RPC error response 00:11:46.163 response: 00:11:46.163 { 00:11:46.163 "code": -17, 00:11:46.163 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:46.163 } 00:11:46.163 09:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:11:46.163 09:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:11:46.163 09:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:46.163 09:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:46.163 09:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:46.163 09:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.163 09:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.163 09:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:46.163 09:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.163 09:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.163 09:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:46.163 09:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:46.163 09:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:11:46.163 09:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.163 09:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.163 [2024-10-21 09:56:22.702723] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:46.163 [2024-10-21 09:56:22.702801] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:46.163 [2024-10-21 09:56:22.702822] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:11:46.164 [2024-10-21 09:56:22.702853] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:46.164 [2024-10-21 09:56:22.705446] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:46.164 [2024-10-21 09:56:22.705495] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:46.164 [2024-10-21 09:56:22.705635] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:46.164 [2024-10-21 09:56:22.705724] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:46.164 pt1 00:11:46.164 09:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.164 09:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:11:46.164 09:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:46.164 09:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:46.164 09:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:46.164 09:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:46.164 09:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:11:46.164 09:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:46.164 09:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:46.164 09:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:46.164 09:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:46.164 09:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.164 09:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.164 09:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.164 09:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:46.164 09:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.164 09:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:46.164 "name": "raid_bdev1", 00:11:46.164 "uuid": "9a54f84c-a924-4bbb-9e50-df6a4f08fcc5", 00:11:46.164 "strip_size_kb": 64, 00:11:46.164 "state": "configuring", 00:11:46.164 "raid_level": "concat", 00:11:46.164 "superblock": true, 00:11:46.164 "num_base_bdevs": 4, 00:11:46.164 "num_base_bdevs_discovered": 1, 00:11:46.164 "num_base_bdevs_operational": 4, 00:11:46.164 "base_bdevs_list": [ 00:11:46.164 { 00:11:46.164 "name": "pt1", 00:11:46.164 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:46.164 "is_configured": true, 00:11:46.164 "data_offset": 2048, 00:11:46.164 "data_size": 63488 00:11:46.164 }, 00:11:46.164 { 00:11:46.164 "name": null, 00:11:46.164 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:46.164 "is_configured": false, 00:11:46.164 "data_offset": 2048, 00:11:46.164 "data_size": 63488 00:11:46.164 }, 00:11:46.164 { 00:11:46.164 "name": null, 00:11:46.164 
"uuid": "00000000-0000-0000-0000-000000000003", 00:11:46.164 "is_configured": false, 00:11:46.164 "data_offset": 2048, 00:11:46.164 "data_size": 63488 00:11:46.164 }, 00:11:46.164 { 00:11:46.164 "name": null, 00:11:46.164 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:46.164 "is_configured": false, 00:11:46.164 "data_offset": 2048, 00:11:46.164 "data_size": 63488 00:11:46.164 } 00:11:46.164 ] 00:11:46.164 }' 00:11:46.164 09:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:46.164 09:56:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.734 09:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:11:46.734 09:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:46.734 09:56:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.734 09:56:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.734 [2024-10-21 09:56:23.162751] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:46.734 [2024-10-21 09:56:23.162851] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:46.734 [2024-10-21 09:56:23.162878] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:11:46.734 [2024-10-21 09:56:23.162893] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:46.734 [2024-10-21 09:56:23.163515] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:46.734 [2024-10-21 09:56:23.163558] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:46.734 [2024-10-21 09:56:23.163691] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:46.734 [2024-10-21 09:56:23.163734] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:46.734 pt2 00:11:46.734 09:56:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.734 09:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:46.734 09:56:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.734 09:56:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.734 [2024-10-21 09:56:23.174801] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:46.734 09:56:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.734 09:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:11:46.734 09:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:46.734 09:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:46.734 09:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:46.734 09:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:46.734 09:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:46.734 09:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:46.734 09:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:46.734 09:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:46.734 09:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:46.734 09:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.734 09:56:23 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.734 09:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:46.734 09:56:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.734 09:56:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.734 09:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:46.734 "name": "raid_bdev1", 00:11:46.734 "uuid": "9a54f84c-a924-4bbb-9e50-df6a4f08fcc5", 00:11:46.734 "strip_size_kb": 64, 00:11:46.734 "state": "configuring", 00:11:46.734 "raid_level": "concat", 00:11:46.734 "superblock": true, 00:11:46.734 "num_base_bdevs": 4, 00:11:46.734 "num_base_bdevs_discovered": 1, 00:11:46.734 "num_base_bdevs_operational": 4, 00:11:46.734 "base_bdevs_list": [ 00:11:46.734 { 00:11:46.734 "name": "pt1", 00:11:46.734 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:46.734 "is_configured": true, 00:11:46.734 "data_offset": 2048, 00:11:46.734 "data_size": 63488 00:11:46.734 }, 00:11:46.734 { 00:11:46.734 "name": null, 00:11:46.734 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:46.734 "is_configured": false, 00:11:46.734 "data_offset": 0, 00:11:46.734 "data_size": 63488 00:11:46.734 }, 00:11:46.734 { 00:11:46.734 "name": null, 00:11:46.734 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:46.734 "is_configured": false, 00:11:46.734 "data_offset": 2048, 00:11:46.734 "data_size": 63488 00:11:46.734 }, 00:11:46.734 { 00:11:46.734 "name": null, 00:11:46.734 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:46.734 "is_configured": false, 00:11:46.734 "data_offset": 2048, 00:11:46.734 "data_size": 63488 00:11:46.734 } 00:11:46.734 ] 00:11:46.734 }' 00:11:46.734 09:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:46.734 09:56:23 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:47.304 09:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:47.304 09:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:47.304 09:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:47.304 09:56:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.304 09:56:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.304 [2024-10-21 09:56:23.642601] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:47.304 [2024-10-21 09:56:23.642682] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:47.304 [2024-10-21 09:56:23.642710] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:47.304 [2024-10-21 09:56:23.642723] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:47.304 [2024-10-21 09:56:23.643362] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:47.304 [2024-10-21 09:56:23.643399] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:47.304 [2024-10-21 09:56:23.643525] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:47.304 [2024-10-21 09:56:23.643563] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:47.304 pt2 00:11:47.304 09:56:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.304 09:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:47.304 09:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:47.304 09:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:47.304 09:56:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.304 09:56:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.304 [2024-10-21 09:56:23.654524] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:47.304 [2024-10-21 09:56:23.654616] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:47.304 [2024-10-21 09:56:23.654656] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:47.304 [2024-10-21 09:56:23.654672] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:47.305 [2024-10-21 09:56:23.655270] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:47.305 [2024-10-21 09:56:23.655308] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:47.305 [2024-10-21 09:56:23.655421] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:47.305 [2024-10-21 09:56:23.655457] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:47.305 pt3 00:11:47.305 09:56:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.305 09:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:47.305 09:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:47.305 09:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:47.305 09:56:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.305 09:56:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.305 [2024-10-21 09:56:23.666467] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:47.305 [2024-10-21 09:56:23.666551] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:47.305 [2024-10-21 09:56:23.666591] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:47.305 [2024-10-21 09:56:23.666603] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:47.305 [2024-10-21 09:56:23.667178] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:47.305 [2024-10-21 09:56:23.667213] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:47.305 [2024-10-21 09:56:23.667320] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:47.305 [2024-10-21 09:56:23.667355] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:47.305 [2024-10-21 09:56:23.667540] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:11:47.305 [2024-10-21 09:56:23.667560] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:47.305 [2024-10-21 09:56:23.667904] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:11:47.305 [2024-10-21 09:56:23.668112] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:11:47.305 [2024-10-21 09:56:23.668140] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:11:47.305 [2024-10-21 09:56:23.668304] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:47.305 pt4 00:11:47.305 09:56:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.305 09:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:47.305 09:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:11:47.305 09:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:47.305 09:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:47.305 09:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:47.305 09:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:47.305 09:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:47.305 09:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:47.305 09:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:47.305 09:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:47.305 09:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:47.305 09:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:47.305 09:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.305 09:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:47.305 09:56:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.305 09:56:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.305 09:56:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.305 09:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:47.305 "name": "raid_bdev1", 00:11:47.305 "uuid": "9a54f84c-a924-4bbb-9e50-df6a4f08fcc5", 00:11:47.305 "strip_size_kb": 64, 00:11:47.305 "state": "online", 00:11:47.305 "raid_level": "concat", 00:11:47.305 
"superblock": true, 00:11:47.305 "num_base_bdevs": 4, 00:11:47.305 "num_base_bdevs_discovered": 4, 00:11:47.305 "num_base_bdevs_operational": 4, 00:11:47.305 "base_bdevs_list": [ 00:11:47.305 { 00:11:47.305 "name": "pt1", 00:11:47.305 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:47.305 "is_configured": true, 00:11:47.305 "data_offset": 2048, 00:11:47.305 "data_size": 63488 00:11:47.305 }, 00:11:47.305 { 00:11:47.305 "name": "pt2", 00:11:47.305 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:47.305 "is_configured": true, 00:11:47.305 "data_offset": 2048, 00:11:47.305 "data_size": 63488 00:11:47.305 }, 00:11:47.305 { 00:11:47.305 "name": "pt3", 00:11:47.305 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:47.305 "is_configured": true, 00:11:47.305 "data_offset": 2048, 00:11:47.305 "data_size": 63488 00:11:47.305 }, 00:11:47.305 { 00:11:47.305 "name": "pt4", 00:11:47.305 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:47.305 "is_configured": true, 00:11:47.305 "data_offset": 2048, 00:11:47.305 "data_size": 63488 00:11:47.305 } 00:11:47.305 ] 00:11:47.305 }' 00:11:47.305 09:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:47.305 09:56:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.565 09:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:47.565 09:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:47.565 09:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:47.565 09:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:47.565 09:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:47.565 09:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:47.565 09:56:24 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:47.565 09:56:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.565 09:56:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.565 09:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:47.565 [2024-10-21 09:56:24.114166] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:47.565 09:56:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.565 09:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:47.565 "name": "raid_bdev1", 00:11:47.565 "aliases": [ 00:11:47.565 "9a54f84c-a924-4bbb-9e50-df6a4f08fcc5" 00:11:47.565 ], 00:11:47.565 "product_name": "Raid Volume", 00:11:47.565 "block_size": 512, 00:11:47.565 "num_blocks": 253952, 00:11:47.565 "uuid": "9a54f84c-a924-4bbb-9e50-df6a4f08fcc5", 00:11:47.565 "assigned_rate_limits": { 00:11:47.565 "rw_ios_per_sec": 0, 00:11:47.565 "rw_mbytes_per_sec": 0, 00:11:47.565 "r_mbytes_per_sec": 0, 00:11:47.565 "w_mbytes_per_sec": 0 00:11:47.565 }, 00:11:47.565 "claimed": false, 00:11:47.565 "zoned": false, 00:11:47.565 "supported_io_types": { 00:11:47.565 "read": true, 00:11:47.565 "write": true, 00:11:47.565 "unmap": true, 00:11:47.565 "flush": true, 00:11:47.565 "reset": true, 00:11:47.565 "nvme_admin": false, 00:11:47.565 "nvme_io": false, 00:11:47.565 "nvme_io_md": false, 00:11:47.565 "write_zeroes": true, 00:11:47.565 "zcopy": false, 00:11:47.565 "get_zone_info": false, 00:11:47.565 "zone_management": false, 00:11:47.565 "zone_append": false, 00:11:47.565 "compare": false, 00:11:47.565 "compare_and_write": false, 00:11:47.565 "abort": false, 00:11:47.565 "seek_hole": false, 00:11:47.565 "seek_data": false, 00:11:47.565 "copy": false, 00:11:47.565 "nvme_iov_md": false 00:11:47.565 }, 00:11:47.565 
"memory_domains": [ 00:11:47.565 { 00:11:47.565 "dma_device_id": "system", 00:11:47.565 "dma_device_type": 1 00:11:47.565 }, 00:11:47.565 { 00:11:47.565 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:47.565 "dma_device_type": 2 00:11:47.565 }, 00:11:47.565 { 00:11:47.565 "dma_device_id": "system", 00:11:47.565 "dma_device_type": 1 00:11:47.565 }, 00:11:47.565 { 00:11:47.565 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:47.565 "dma_device_type": 2 00:11:47.565 }, 00:11:47.565 { 00:11:47.565 "dma_device_id": "system", 00:11:47.565 "dma_device_type": 1 00:11:47.565 }, 00:11:47.565 { 00:11:47.565 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:47.565 "dma_device_type": 2 00:11:47.565 }, 00:11:47.565 { 00:11:47.565 "dma_device_id": "system", 00:11:47.565 "dma_device_type": 1 00:11:47.565 }, 00:11:47.565 { 00:11:47.565 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:47.565 "dma_device_type": 2 00:11:47.565 } 00:11:47.565 ], 00:11:47.565 "driver_specific": { 00:11:47.565 "raid": { 00:11:47.565 "uuid": "9a54f84c-a924-4bbb-9e50-df6a4f08fcc5", 00:11:47.565 "strip_size_kb": 64, 00:11:47.565 "state": "online", 00:11:47.565 "raid_level": "concat", 00:11:47.565 "superblock": true, 00:11:47.565 "num_base_bdevs": 4, 00:11:47.565 "num_base_bdevs_discovered": 4, 00:11:47.565 "num_base_bdevs_operational": 4, 00:11:47.565 "base_bdevs_list": [ 00:11:47.565 { 00:11:47.565 "name": "pt1", 00:11:47.565 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:47.565 "is_configured": true, 00:11:47.565 "data_offset": 2048, 00:11:47.565 "data_size": 63488 00:11:47.565 }, 00:11:47.565 { 00:11:47.565 "name": "pt2", 00:11:47.565 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:47.565 "is_configured": true, 00:11:47.565 "data_offset": 2048, 00:11:47.566 "data_size": 63488 00:11:47.566 }, 00:11:47.566 { 00:11:47.566 "name": "pt3", 00:11:47.566 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:47.566 "is_configured": true, 00:11:47.566 "data_offset": 2048, 00:11:47.566 "data_size": 63488 
00:11:47.566 }, 00:11:47.566 { 00:11:47.566 "name": "pt4", 00:11:47.566 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:47.566 "is_configured": true, 00:11:47.566 "data_offset": 2048, 00:11:47.566 "data_size": 63488 00:11:47.566 } 00:11:47.566 ] 00:11:47.566 } 00:11:47.566 } 00:11:47.566 }' 00:11:47.566 09:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:47.826 09:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:47.826 pt2 00:11:47.826 pt3 00:11:47.826 pt4' 00:11:47.826 09:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:47.826 09:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:47.826 09:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:47.826 09:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:47.826 09:56:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.826 09:56:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.826 09:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:47.826 09:56:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.826 09:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:47.826 09:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:47.826 09:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:47.826 09:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:11:47.826 09:56:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.826 09:56:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.826 09:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:47.826 09:56:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.826 09:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:47.826 09:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:47.826 09:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:47.826 09:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:47.826 09:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:47.826 09:56:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.826 09:56:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.826 09:56:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.826 09:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:47.826 09:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:47.826 09:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:47.826 09:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:47.826 09:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:11:47.826 09:56:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.826 09:56:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.826 09:56:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.826 09:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:47.826 09:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:48.086 09:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:48.086 09:56:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.086 09:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:48.086 09:56:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.086 [2024-10-21 09:56:24.429588] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:48.086 09:56:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.086 09:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 9a54f84c-a924-4bbb-9e50-df6a4f08fcc5 '!=' 9a54f84c-a924-4bbb-9e50-df6a4f08fcc5 ']' 00:11:48.086 09:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:11:48.086 09:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:48.086 09:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:48.086 09:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72208 00:11:48.086 09:56:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 72208 ']' 00:11:48.086 09:56:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 72208 00:11:48.086 09:56:24 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@955 -- # uname 00:11:48.086 09:56:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:48.086 09:56:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72208 00:11:48.086 09:56:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:48.086 09:56:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:48.086 09:56:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72208' 00:11:48.086 killing process with pid 72208 00:11:48.086 09:56:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 72208 00:11:48.086 [2024-10-21 09:56:24.509996] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:48.086 09:56:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 72208 00:11:48.086 [2024-10-21 09:56:24.510144] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:48.086 [2024-10-21 09:56:24.510272] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:48.086 [2024-10-21 09:56:24.510290] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:11:48.656 [2024-10-21 09:56:24.964540] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:50.041 09:56:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:50.041 00:11:50.041 real 0m5.801s 00:11:50.041 user 0m7.952s 00:11:50.041 sys 0m1.211s 00:11:50.041 09:56:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:50.041 09:56:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.041 ************************************ 00:11:50.041 END TEST raid_superblock_test 
00:11:50.041 ************************************ 00:11:50.041 09:56:26 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:11:50.041 09:56:26 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:50.041 09:56:26 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:50.041 09:56:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:50.041 ************************************ 00:11:50.041 START TEST raid_read_error_test 00:11:50.041 ************************************ 00:11:50.041 09:56:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 4 read 00:11:50.041 09:56:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:50.041 09:56:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:50.041 09:56:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:50.041 09:56:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:50.041 09:56:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:50.041 09:56:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:50.041 09:56:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:50.041 09:56:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:50.041 09:56:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:50.041 09:56:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:50.041 09:56:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:50.041 09:56:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:50.041 09:56:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # 
(( i++ )) 00:11:50.041 09:56:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:50.041 09:56:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:50.041 09:56:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:50.041 09:56:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:50.041 09:56:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:50.041 09:56:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:50.041 09:56:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:50.041 09:56:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:50.041 09:56:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:50.041 09:56:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:50.041 09:56:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:50.041 09:56:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:50.041 09:56:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:50.041 09:56:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:50.041 09:56:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:50.041 09:56:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.8jaz6EbqEw 00:11:50.041 09:56:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72473 00:11:50.041 09:56:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72473 00:11:50.041 09:56:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:50.041 09:56:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 72473 ']' 00:11:50.041 09:56:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:50.041 09:56:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:50.041 09:56:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:50.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:50.041 09:56:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:50.041 09:56:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.041 [2024-10-21 09:56:26.432786] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
00:11:50.041 [2024-10-21 09:56:26.432959] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72473 ] 00:11:50.041 [2024-10-21 09:56:26.600089] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:50.301 [2024-10-21 09:56:26.745238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:50.560 [2024-10-21 09:56:27.006631] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:50.560 [2024-10-21 09:56:27.006692] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:50.819 09:56:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:50.819 09:56:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:11:50.819 09:56:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:50.820 09:56:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:50.820 09:56:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.820 09:56:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.820 BaseBdev1_malloc 00:11:50.820 09:56:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.820 09:56:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:50.820 09:56:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.820 09:56:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.820 true 00:11:50.820 09:56:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:11:50.820 09:56:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:50.820 09:56:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.820 09:56:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.820 [2024-10-21 09:56:27.401891] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:50.820 [2024-10-21 09:56:27.401967] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:50.820 [2024-10-21 09:56:27.401991] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:50.820 [2024-10-21 09:56:27.402011] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:50.820 [2024-10-21 09:56:27.404877] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:50.820 [2024-10-21 09:56:27.404930] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:50.820 BaseBdev1 00:11:50.820 09:56:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.820 09:56:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:50.820 09:56:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:50.820 09:56:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.820 09:56:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.079 BaseBdev2_malloc 00:11:51.079 09:56:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.079 09:56:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:51.079 09:56:27 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.079 09:56:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.079 true 00:11:51.079 09:56:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.079 09:56:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:51.079 09:56:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.079 09:56:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.079 [2024-10-21 09:56:27.478525] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:51.079 [2024-10-21 09:56:27.478627] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:51.079 [2024-10-21 09:56:27.478653] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:11:51.079 [2024-10-21 09:56:27.478669] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:51.079 [2024-10-21 09:56:27.481271] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:51.079 [2024-10-21 09:56:27.481320] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:51.079 BaseBdev2 00:11:51.079 09:56:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.079 09:56:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:51.079 09:56:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:51.079 09:56:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.079 09:56:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.079 BaseBdev3_malloc 00:11:51.079 09:56:27 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.079 09:56:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:51.079 09:56:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.079 09:56:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.079 true 00:11:51.079 09:56:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.079 09:56:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:51.079 09:56:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.079 09:56:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.079 [2024-10-21 09:56:27.568353] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:51.079 [2024-10-21 09:56:27.568487] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:51.079 [2024-10-21 09:56:27.568517] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:11:51.079 [2024-10-21 09:56:27.568531] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:51.079 [2024-10-21 09:56:27.571083] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:51.079 [2024-10-21 09:56:27.571137] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:51.079 BaseBdev3 00:11:51.079 09:56:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.079 09:56:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:51.079 09:56:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:11:51.079 09:56:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.079 09:56:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.079 BaseBdev4_malloc 00:11:51.079 09:56:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.079 09:56:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:51.079 09:56:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.080 09:56:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.080 true 00:11:51.080 09:56:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.080 09:56:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:51.080 09:56:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.080 09:56:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.080 [2024-10-21 09:56:27.645361] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:51.080 [2024-10-21 09:56:27.645500] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:51.080 [2024-10-21 09:56:27.645544] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:11:51.080 [2024-10-21 09:56:27.645599] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:51.080 [2024-10-21 09:56:27.648069] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:51.080 [2024-10-21 09:56:27.648166] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:51.080 BaseBdev4 00:11:51.080 09:56:27 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.080 09:56:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:51.080 09:56:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.080 09:56:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.080 [2024-10-21 09:56:27.657421] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:51.080 [2024-10-21 09:56:27.659734] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:51.080 [2024-10-21 09:56:27.659872] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:51.080 [2024-10-21 09:56:27.659991] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:51.080 [2024-10-21 09:56:27.660324] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:11:51.080 [2024-10-21 09:56:27.660391] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:51.080 [2024-10-21 09:56:27.660736] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:11:51.080 [2024-10-21 09:56:27.660979] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:11:51.080 [2024-10-21 09:56:27.661030] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:11:51.080 [2024-10-21 09:56:27.661290] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:51.080 09:56:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.080 09:56:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:51.080 09:56:27 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:51.080 09:56:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:51.080 09:56:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:51.080 09:56:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:51.080 09:56:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:51.080 09:56:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:51.080 09:56:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:51.080 09:56:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:51.080 09:56:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:51.080 09:56:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:51.080 09:56:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.080 09:56:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.080 09:56:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.340 09:56:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.340 09:56:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:51.340 "name": "raid_bdev1", 00:11:51.340 "uuid": "3d68e503-81c2-451e-bcfa-da6555e09b7c", 00:11:51.340 "strip_size_kb": 64, 00:11:51.340 "state": "online", 00:11:51.340 "raid_level": "concat", 00:11:51.340 "superblock": true, 00:11:51.340 "num_base_bdevs": 4, 00:11:51.340 "num_base_bdevs_discovered": 4, 00:11:51.340 "num_base_bdevs_operational": 4, 00:11:51.340 "base_bdevs_list": [ 
00:11:51.340 { 00:11:51.340 "name": "BaseBdev1", 00:11:51.340 "uuid": "29cce1f1-780f-543f-9ba5-1c7cafafa537", 00:11:51.340 "is_configured": true, 00:11:51.340 "data_offset": 2048, 00:11:51.340 "data_size": 63488 00:11:51.340 }, 00:11:51.340 { 00:11:51.340 "name": "BaseBdev2", 00:11:51.340 "uuid": "3a0204b3-3999-5dbc-8b39-bd9a006408fe", 00:11:51.340 "is_configured": true, 00:11:51.340 "data_offset": 2048, 00:11:51.340 "data_size": 63488 00:11:51.340 }, 00:11:51.340 { 00:11:51.340 "name": "BaseBdev3", 00:11:51.340 "uuid": "8c3b0713-7ff1-5dc1-bbdc-90dd11dffde5", 00:11:51.340 "is_configured": true, 00:11:51.340 "data_offset": 2048, 00:11:51.340 "data_size": 63488 00:11:51.340 }, 00:11:51.340 { 00:11:51.340 "name": "BaseBdev4", 00:11:51.340 "uuid": "60dfd5cc-b5de-5711-8906-899d9bbacdb6", 00:11:51.340 "is_configured": true, 00:11:51.340 "data_offset": 2048, 00:11:51.340 "data_size": 63488 00:11:51.340 } 00:11:51.340 ] 00:11:51.340 }' 00:11:51.340 09:56:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:51.340 09:56:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.599 09:56:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:51.599 09:56:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:51.599 [2024-10-21 09:56:28.190206] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:52.539 09:56:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:52.539 09:56:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.539 09:56:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.539 09:56:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.539 09:56:29 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:52.539 09:56:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:52.539 09:56:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:52.539 09:56:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:52.539 09:56:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:52.539 09:56:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:52.539 09:56:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:52.539 09:56:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:52.539 09:56:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:52.539 09:56:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:52.539 09:56:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:52.539 09:56:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:52.539 09:56:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:52.539 09:56:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.539 09:56:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.539 09:56:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:52.539 09:56:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.799 09:56:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.799 09:56:29 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:52.799 "name": "raid_bdev1", 00:11:52.799 "uuid": "3d68e503-81c2-451e-bcfa-da6555e09b7c", 00:11:52.799 "strip_size_kb": 64, 00:11:52.799 "state": "online", 00:11:52.799 "raid_level": "concat", 00:11:52.799 "superblock": true, 00:11:52.799 "num_base_bdevs": 4, 00:11:52.799 "num_base_bdevs_discovered": 4, 00:11:52.799 "num_base_bdevs_operational": 4, 00:11:52.799 "base_bdevs_list": [ 00:11:52.799 { 00:11:52.799 "name": "BaseBdev1", 00:11:52.799 "uuid": "29cce1f1-780f-543f-9ba5-1c7cafafa537", 00:11:52.799 "is_configured": true, 00:11:52.799 "data_offset": 2048, 00:11:52.799 "data_size": 63488 00:11:52.799 }, 00:11:52.799 { 00:11:52.799 "name": "BaseBdev2", 00:11:52.799 "uuid": "3a0204b3-3999-5dbc-8b39-bd9a006408fe", 00:11:52.799 "is_configured": true, 00:11:52.799 "data_offset": 2048, 00:11:52.799 "data_size": 63488 00:11:52.799 }, 00:11:52.799 { 00:11:52.799 "name": "BaseBdev3", 00:11:52.799 "uuid": "8c3b0713-7ff1-5dc1-bbdc-90dd11dffde5", 00:11:52.799 "is_configured": true, 00:11:52.799 "data_offset": 2048, 00:11:52.799 "data_size": 63488 00:11:52.799 }, 00:11:52.799 { 00:11:52.799 "name": "BaseBdev4", 00:11:52.799 "uuid": "60dfd5cc-b5de-5711-8906-899d9bbacdb6", 00:11:52.799 "is_configured": true, 00:11:52.799 "data_offset": 2048, 00:11:52.799 "data_size": 63488 00:11:52.799 } 00:11:52.799 ] 00:11:52.799 }' 00:11:52.799 09:56:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:52.799 09:56:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.058 09:56:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:53.058 09:56:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.058 09:56:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.058 [2024-10-21 09:56:29.563800] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:53.058 [2024-10-21 09:56:29.563901] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:53.059 [2024-10-21 09:56:29.566582] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:53.059 [2024-10-21 09:56:29.566715] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:53.059 [2024-10-21 09:56:29.566794] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:53.059 [2024-10-21 09:56:29.566895] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:11:53.059 { 00:11:53.059 "results": [ 00:11:53.059 { 00:11:53.059 "job": "raid_bdev1", 00:11:53.059 "core_mask": "0x1", 00:11:53.059 "workload": "randrw", 00:11:53.059 "percentage": 50, 00:11:53.059 "status": "finished", 00:11:53.059 "queue_depth": 1, 00:11:53.059 "io_size": 131072, 00:11:53.059 "runtime": 1.373953, 00:11:53.059 "iops": 12512.800656208765, 00:11:53.059 "mibps": 1564.1000820260956, 00:11:53.059 "io_failed": 1, 00:11:53.059 "io_timeout": 0, 00:11:53.059 "avg_latency_us": 112.52184861463624, 00:11:53.059 "min_latency_us": 28.28296943231441, 00:11:53.059 "max_latency_us": 1609.7816593886462 00:11:53.059 } 00:11:53.059 ], 00:11:53.059 "core_count": 1 00:11:53.059 } 00:11:53.059 09:56:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.059 09:56:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72473 00:11:53.059 09:56:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 72473 ']' 00:11:53.059 09:56:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 72473 00:11:53.059 09:56:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:11:53.059 09:56:29 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:53.059 09:56:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72473 00:11:53.059 09:56:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:53.059 killing process with pid 72473 00:11:53.059 09:56:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:53.059 09:56:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72473' 00:11:53.059 09:56:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 72473 00:11:53.059 [2024-10-21 09:56:29.602608] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:53.059 09:56:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 72473 00:11:53.630 [2024-10-21 09:56:29.963622] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:55.026 09:56:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.8jaz6EbqEw 00:11:55.026 09:56:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:55.026 09:56:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:55.026 09:56:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:11:55.026 09:56:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:55.026 09:56:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:55.026 09:56:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:55.026 09:56:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:11:55.026 00:11:55.026 real 0m4.993s 00:11:55.026 user 0m5.739s 00:11:55.026 sys 0m0.748s 00:11:55.026 09:56:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:11:55.026 ************************************ 00:11:55.026 END TEST raid_read_error_test 00:11:55.026 09:56:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.026 ************************************ 00:11:55.026 09:56:31 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:11:55.026 09:56:31 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:55.026 09:56:31 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:55.026 09:56:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:55.026 ************************************ 00:11:55.026 START TEST raid_write_error_test 00:11:55.026 ************************************ 00:11:55.026 09:56:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 4 write 00:11:55.026 09:56:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:55.026 09:56:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:55.026 09:56:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:55.026 09:56:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:55.026 09:56:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:55.026 09:56:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:55.026 09:56:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:55.026 09:56:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:55.026 09:56:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:55.026 09:56:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:55.026 09:56:31 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:55.026 09:56:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:55.026 09:56:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:55.026 09:56:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:55.026 09:56:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:55.026 09:56:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:55.026 09:56:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:55.026 09:56:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:55.026 09:56:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:55.026 09:56:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:55.026 09:56:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:55.026 09:56:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:55.026 09:56:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:55.026 09:56:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:55.026 09:56:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:55.026 09:56:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:55.026 09:56:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:55.026 09:56:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:55.026 09:56:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.6tsqBT8J6N 00:11:55.026 09:56:31 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72624 00:11:55.026 09:56:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:55.026 09:56:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72624 00:11:55.026 09:56:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 72624 ']' 00:11:55.026 09:56:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:55.026 09:56:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:55.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:55.026 09:56:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:55.026 09:56:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:55.026 09:56:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.026 [2024-10-21 09:56:31.508773] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
00:11:55.026 [2024-10-21 09:56:31.508927] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72624 ] 00:11:55.285 [2024-10-21 09:56:31.676002] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:55.285 [2024-10-21 09:56:31.825590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:55.544 [2024-10-21 09:56:32.079385] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:55.544 [2024-10-21 09:56:32.079439] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:55.803 09:56:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:55.803 09:56:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:11:55.803 09:56:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:55.803 09:56:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:55.803 09:56:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.803 09:56:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.063 BaseBdev1_malloc 00:11:56.063 09:56:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.063 09:56:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:56.063 09:56:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.063 09:56:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.063 true 00:11:56.063 09:56:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:11:56.063 09:56:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:56.063 09:56:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.063 09:56:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.063 [2024-10-21 09:56:32.439303] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:56.063 [2024-10-21 09:56:32.439482] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:56.063 [2024-10-21 09:56:32.439528] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:56.063 [2024-10-21 09:56:32.439589] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:56.063 [2024-10-21 09:56:32.442145] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:56.063 [2024-10-21 09:56:32.442197] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:56.063 BaseBdev1 00:11:56.063 09:56:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.063 09:56:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:56.063 09:56:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:56.063 09:56:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.063 09:56:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.063 BaseBdev2_malloc 00:11:56.063 09:56:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.063 09:56:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:56.063 09:56:32 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.063 09:56:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.063 true 00:11:56.063 09:56:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.063 09:56:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:56.063 09:56:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.063 09:56:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.063 [2024-10-21 09:56:32.520188] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:56.063 [2024-10-21 09:56:32.520376] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:56.063 [2024-10-21 09:56:32.520421] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:11:56.063 [2024-10-21 09:56:32.520467] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:56.063 [2024-10-21 09:56:32.523197] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:56.063 [2024-10-21 09:56:32.523309] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:56.063 BaseBdev2 00:11:56.063 09:56:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.063 09:56:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:56.063 09:56:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:56.063 09:56:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.063 09:56:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:11:56.063 BaseBdev3_malloc 00:11:56.063 09:56:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.063 09:56:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:56.063 09:56:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.063 09:56:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.063 true 00:11:56.063 09:56:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.063 09:56:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:56.063 09:56:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.063 09:56:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.063 [2024-10-21 09:56:32.614077] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:56.063 [2024-10-21 09:56:32.614183] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:56.063 [2024-10-21 09:56:32.614212] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:11:56.063 [2024-10-21 09:56:32.614229] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:56.063 [2024-10-21 09:56:32.616898] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:56.063 [2024-10-21 09:56:32.616951] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:56.063 BaseBdev3 00:11:56.063 09:56:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.063 09:56:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:56.063 09:56:32 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:56.063 09:56:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.063 09:56:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.324 BaseBdev4_malloc 00:11:56.324 09:56:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.324 09:56:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:56.324 09:56:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.324 09:56:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.324 true 00:11:56.324 09:56:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.324 09:56:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:56.324 09:56:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.324 09:56:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.324 [2024-10-21 09:56:32.694078] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:56.324 [2024-10-21 09:56:32.694263] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:56.324 [2024-10-21 09:56:32.694310] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:11:56.324 [2024-10-21 09:56:32.694353] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:56.324 [2024-10-21 09:56:32.696978] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:56.324 [2024-10-21 09:56:32.697086] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:56.324 BaseBdev4 
00:11:56.324 09:56:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.324 09:56:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:56.324 09:56:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.324 09:56:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.324 [2024-10-21 09:56:32.706147] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:56.324 [2024-10-21 09:56:32.708406] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:56.324 [2024-10-21 09:56:32.708548] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:56.324 [2024-10-21 09:56:32.708641] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:56.324 [2024-10-21 09:56:32.708918] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:11:56.324 [2024-10-21 09:56:32.708936] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:56.324 [2024-10-21 09:56:32.709267] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:11:56.324 [2024-10-21 09:56:32.709468] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:11:56.324 [2024-10-21 09:56:32.709487] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:11:56.324 [2024-10-21 09:56:32.709752] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:56.324 09:56:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.324 09:56:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:11:56.324 09:56:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:56.324 09:56:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:56.324 09:56:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:56.324 09:56:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:56.324 09:56:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:56.324 09:56:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:56.324 09:56:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:56.324 09:56:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:56.324 09:56:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:56.324 09:56:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.324 09:56:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:56.324 09:56:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.324 09:56:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.324 09:56:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.324 09:56:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:56.324 "name": "raid_bdev1", 00:11:56.324 "uuid": "a2ebbad8-0aa9-41c3-856f-6b64a33f05a6", 00:11:56.324 "strip_size_kb": 64, 00:11:56.324 "state": "online", 00:11:56.324 "raid_level": "concat", 00:11:56.324 "superblock": true, 00:11:56.324 "num_base_bdevs": 4, 00:11:56.324 "num_base_bdevs_discovered": 4, 00:11:56.324 
"num_base_bdevs_operational": 4, 00:11:56.324 "base_bdevs_list": [ 00:11:56.324 { 00:11:56.324 "name": "BaseBdev1", 00:11:56.324 "uuid": "da05333a-2c44-55c9-bdce-ee532dcb9327", 00:11:56.324 "is_configured": true, 00:11:56.324 "data_offset": 2048, 00:11:56.324 "data_size": 63488 00:11:56.324 }, 00:11:56.324 { 00:11:56.324 "name": "BaseBdev2", 00:11:56.324 "uuid": "0cc7b6ae-47f2-53df-8033-0b0fc357f962", 00:11:56.324 "is_configured": true, 00:11:56.324 "data_offset": 2048, 00:11:56.324 "data_size": 63488 00:11:56.324 }, 00:11:56.324 { 00:11:56.324 "name": "BaseBdev3", 00:11:56.324 "uuid": "e2e1b892-b841-54b8-9537-5218064ee62a", 00:11:56.324 "is_configured": true, 00:11:56.324 "data_offset": 2048, 00:11:56.324 "data_size": 63488 00:11:56.324 }, 00:11:56.324 { 00:11:56.324 "name": "BaseBdev4", 00:11:56.324 "uuid": "b9cf6a47-a7b7-5e29-bbdc-a91565435317", 00:11:56.324 "is_configured": true, 00:11:56.324 "data_offset": 2048, 00:11:56.324 "data_size": 63488 00:11:56.324 } 00:11:56.324 ] 00:11:56.324 }' 00:11:56.324 09:56:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:56.324 09:56:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.584 09:56:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:56.584 09:56:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:56.844 [2024-10-21 09:56:33.222984] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:57.781 09:56:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:57.781 09:56:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.781 09:56:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.781 09:56:34 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.781 09:56:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:57.781 09:56:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:57.781 09:56:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:57.781 09:56:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:57.781 09:56:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:57.781 09:56:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:57.781 09:56:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:57.781 09:56:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:57.781 09:56:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:57.781 09:56:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:57.781 09:56:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:57.781 09:56:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:57.781 09:56:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:57.781 09:56:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.781 09:56:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:57.781 09:56:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.781 09:56:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.781 09:56:34 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.781 09:56:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:57.781 "name": "raid_bdev1", 00:11:57.781 "uuid": "a2ebbad8-0aa9-41c3-856f-6b64a33f05a6", 00:11:57.781 "strip_size_kb": 64, 00:11:57.781 "state": "online", 00:11:57.781 "raid_level": "concat", 00:11:57.781 "superblock": true, 00:11:57.781 "num_base_bdevs": 4, 00:11:57.781 "num_base_bdevs_discovered": 4, 00:11:57.781 "num_base_bdevs_operational": 4, 00:11:57.781 "base_bdevs_list": [ 00:11:57.781 { 00:11:57.781 "name": "BaseBdev1", 00:11:57.781 "uuid": "da05333a-2c44-55c9-bdce-ee532dcb9327", 00:11:57.781 "is_configured": true, 00:11:57.781 "data_offset": 2048, 00:11:57.781 "data_size": 63488 00:11:57.781 }, 00:11:57.781 { 00:11:57.781 "name": "BaseBdev2", 00:11:57.781 "uuid": "0cc7b6ae-47f2-53df-8033-0b0fc357f962", 00:11:57.781 "is_configured": true, 00:11:57.781 "data_offset": 2048, 00:11:57.781 "data_size": 63488 00:11:57.781 }, 00:11:57.781 { 00:11:57.781 "name": "BaseBdev3", 00:11:57.781 "uuid": "e2e1b892-b841-54b8-9537-5218064ee62a", 00:11:57.781 "is_configured": true, 00:11:57.781 "data_offset": 2048, 00:11:57.781 "data_size": 63488 00:11:57.781 }, 00:11:57.781 { 00:11:57.781 "name": "BaseBdev4", 00:11:57.781 "uuid": "b9cf6a47-a7b7-5e29-bbdc-a91565435317", 00:11:57.781 "is_configured": true, 00:11:57.781 "data_offset": 2048, 00:11:57.781 "data_size": 63488 00:11:57.781 } 00:11:57.781 ] 00:11:57.781 }' 00:11:57.781 09:56:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:57.781 09:56:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.350 09:56:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:58.350 09:56:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.350 09:56:34 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:58.350 [2024-10-21 09:56:34.657126] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:58.350 [2024-10-21 09:56:34.657274] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:58.350 [2024-10-21 09:56:34.660365] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:58.350 [2024-10-21 09:56:34.660493] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:58.350 [2024-10-21 09:56:34.660586] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:58.350 [2024-10-21 09:56:34.660654] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:11:58.350 { 00:11:58.350 "results": [ 00:11:58.350 { 00:11:58.350 "job": "raid_bdev1", 00:11:58.350 "core_mask": "0x1", 00:11:58.350 "workload": "randrw", 00:11:58.350 "percentage": 50, 00:11:58.350 "status": "finished", 00:11:58.350 "queue_depth": 1, 00:11:58.350 "io_size": 131072, 00:11:58.350 "runtime": 1.434806, 00:11:58.350 "iops": 12590.552311601708, 00:11:58.350 "mibps": 1573.8190389502136, 00:11:58.350 "io_failed": 1, 00:11:58.350 "io_timeout": 0, 00:11:58.350 "avg_latency_us": 111.72968054542369, 00:11:58.350 "min_latency_us": 28.05938864628821, 00:11:58.350 "max_latency_us": 1674.172925764192 00:11:58.350 } 00:11:58.350 ], 00:11:58.350 "core_count": 1 00:11:58.351 } 00:11:58.351 09:56:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.351 09:56:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72624 00:11:58.351 09:56:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 72624 ']' 00:11:58.351 09:56:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 72624 00:11:58.351 09:56:34 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@955 -- # uname 00:11:58.351 09:56:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:58.351 09:56:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72624 00:11:58.351 09:56:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:58.351 09:56:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:58.351 09:56:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72624' 00:11:58.351 killing process with pid 72624 00:11:58.351 09:56:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 72624 00:11:58.351 [2024-10-21 09:56:34.707731] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:58.351 09:56:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 72624 00:11:58.610 [2024-10-21 09:56:35.081100] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:59.990 09:56:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.6tsqBT8J6N 00:11:59.990 09:56:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:59.990 09:56:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:59.990 09:56:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:11:59.990 09:56:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:59.990 09:56:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:59.990 09:56:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:59.990 09:56:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:11:59.990 ************************************ 00:11:59.990 END TEST 
raid_write_error_test 00:11:59.990 ************************************ 00:11:59.990 00:11:59.990 real 0m5.066s 00:11:59.990 user 0m5.811s 00:11:59.990 sys 0m0.742s 00:11:59.990 09:56:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:59.990 09:56:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.990 09:56:36 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:59.990 09:56:36 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:11:59.990 09:56:36 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:59.990 09:56:36 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:59.990 09:56:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:59.990 ************************************ 00:11:59.990 START TEST raid_state_function_test 00:11:59.990 ************************************ 00:11:59.990 09:56:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 4 false 00:11:59.990 09:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:59.990 09:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:59.990 09:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:59.990 09:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:59.990 09:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:59.990 09:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:59.990 09:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:59.990 09:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:59.990 09:56:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:59.990 09:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:59.990 09:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:59.990 09:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:59.990 09:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:59.990 09:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:59.990 09:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:59.990 09:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:59.990 09:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:59.990 09:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:59.990 09:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:59.990 09:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:59.990 09:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:59.990 09:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:59.990 09:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:59.990 09:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:59.990 09:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:59.990 09:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:59.990 09:56:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:59.990 09:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:59.990 Process raid pid: 72768 00:11:59.990 09:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=72768 00:11:59.990 09:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:59.990 09:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72768' 00:11:59.990 09:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 72768 00:11:59.990 09:56:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 72768 ']' 00:11:59.990 09:56:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:59.990 09:56:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:59.990 09:56:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:59.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:59.990 09:56:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:59.990 09:56:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.249 [2024-10-21 09:56:36.640332] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
00:12:00.249 [2024-10-21 09:56:36.640569] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:00.249 [2024-10-21 09:56:36.815092] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:00.509 [2024-10-21 09:56:36.975637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:00.768 [2024-10-21 09:56:37.254651] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:00.768 [2024-10-21 09:56:37.254699] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:01.051 09:56:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:01.051 09:56:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:12:01.051 09:56:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:01.051 09:56:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.051 09:56:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.051 [2024-10-21 09:56:37.515423] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:01.051 [2024-10-21 09:56:37.515607] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:01.051 [2024-10-21 09:56:37.515675] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:01.051 [2024-10-21 09:56:37.515711] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:01.051 [2024-10-21 09:56:37.515762] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:12:01.051 [2024-10-21 09:56:37.515812] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:01.051 [2024-10-21 09:56:37.515849] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:01.051 [2024-10-21 09:56:37.515897] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:01.051 09:56:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.051 09:56:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:01.051 09:56:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:01.051 09:56:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:01.051 09:56:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:01.051 09:56:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:01.051 09:56:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:01.051 09:56:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:01.051 09:56:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:01.051 09:56:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:01.051 09:56:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:01.051 09:56:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.051 09:56:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:01.051 09:56:37 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.051 09:56:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.051 09:56:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.051 09:56:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:01.051 "name": "Existed_Raid", 00:12:01.051 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.051 "strip_size_kb": 0, 00:12:01.051 "state": "configuring", 00:12:01.051 "raid_level": "raid1", 00:12:01.051 "superblock": false, 00:12:01.051 "num_base_bdevs": 4, 00:12:01.051 "num_base_bdevs_discovered": 0, 00:12:01.051 "num_base_bdevs_operational": 4, 00:12:01.051 "base_bdevs_list": [ 00:12:01.051 { 00:12:01.051 "name": "BaseBdev1", 00:12:01.051 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.051 "is_configured": false, 00:12:01.051 "data_offset": 0, 00:12:01.051 "data_size": 0 00:12:01.051 }, 00:12:01.051 { 00:12:01.051 "name": "BaseBdev2", 00:12:01.051 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.051 "is_configured": false, 00:12:01.051 "data_offset": 0, 00:12:01.051 "data_size": 0 00:12:01.051 }, 00:12:01.051 { 00:12:01.051 "name": "BaseBdev3", 00:12:01.051 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.051 "is_configured": false, 00:12:01.051 "data_offset": 0, 00:12:01.051 "data_size": 0 00:12:01.051 }, 00:12:01.051 { 00:12:01.051 "name": "BaseBdev4", 00:12:01.051 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.051 "is_configured": false, 00:12:01.051 "data_offset": 0, 00:12:01.051 "data_size": 0 00:12:01.051 } 00:12:01.051 ] 00:12:01.051 }' 00:12:01.051 09:56:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:01.051 09:56:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.631 09:56:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:12:01.631 09:56:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.631 09:56:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.631 [2024-10-21 09:56:37.998827] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:01.631 [2024-10-21 09:56:37.999002] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000005b80 name Existed_Raid, state configuring 00:12:01.631 09:56:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.631 09:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:01.631 09:56:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.631 09:56:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.631 [2024-10-21 09:56:38.010793] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:01.631 [2024-10-21 09:56:38.010911] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:01.631 [2024-10-21 09:56:38.010951] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:01.631 [2024-10-21 09:56:38.010982] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:01.631 [2024-10-21 09:56:38.011024] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:01.631 [2024-10-21 09:56:38.011063] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:01.631 [2024-10-21 09:56:38.011101] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:01.631 [2024-10-21 09:56:38.011132] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:01.631 09:56:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.631 09:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:01.631 09:56:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.631 09:56:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.631 [2024-10-21 09:56:38.067509] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:01.631 BaseBdev1 00:12:01.631 09:56:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.631 09:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:01.631 09:56:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:12:01.631 09:56:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:01.631 09:56:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:01.631 09:56:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:01.631 09:56:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:01.631 09:56:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:01.631 09:56:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.631 09:56:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.631 09:56:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.631 09:56:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:01.631 09:56:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.631 09:56:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.631 [ 00:12:01.631 { 00:12:01.631 "name": "BaseBdev1", 00:12:01.631 "aliases": [ 00:12:01.631 "bf267c76-a43f-4956-ba11-ad9cf765eb4b" 00:12:01.631 ], 00:12:01.631 "product_name": "Malloc disk", 00:12:01.631 "block_size": 512, 00:12:01.631 "num_blocks": 65536, 00:12:01.631 "uuid": "bf267c76-a43f-4956-ba11-ad9cf765eb4b", 00:12:01.631 "assigned_rate_limits": { 00:12:01.631 "rw_ios_per_sec": 0, 00:12:01.631 "rw_mbytes_per_sec": 0, 00:12:01.631 "r_mbytes_per_sec": 0, 00:12:01.631 "w_mbytes_per_sec": 0 00:12:01.631 }, 00:12:01.631 "claimed": true, 00:12:01.631 "claim_type": "exclusive_write", 00:12:01.631 "zoned": false, 00:12:01.631 "supported_io_types": { 00:12:01.631 "read": true, 00:12:01.631 "write": true, 00:12:01.631 "unmap": true, 00:12:01.631 "flush": true, 00:12:01.631 "reset": true, 00:12:01.631 "nvme_admin": false, 00:12:01.631 "nvme_io": false, 00:12:01.631 "nvme_io_md": false, 00:12:01.631 "write_zeroes": true, 00:12:01.631 "zcopy": true, 00:12:01.631 "get_zone_info": false, 00:12:01.631 "zone_management": false, 00:12:01.631 "zone_append": false, 00:12:01.631 "compare": false, 00:12:01.631 "compare_and_write": false, 00:12:01.631 "abort": true, 00:12:01.631 "seek_hole": false, 00:12:01.631 "seek_data": false, 00:12:01.631 "copy": true, 00:12:01.631 "nvme_iov_md": false 00:12:01.631 }, 00:12:01.631 "memory_domains": [ 00:12:01.631 { 00:12:01.631 "dma_device_id": "system", 00:12:01.631 "dma_device_type": 1 00:12:01.631 }, 00:12:01.631 { 00:12:01.632 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:01.632 "dma_device_type": 2 00:12:01.632 } 00:12:01.632 ], 00:12:01.632 "driver_specific": {} 00:12:01.632 } 00:12:01.632 ] 00:12:01.632 09:56:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:12:01.632 09:56:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:01.632 09:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:01.632 09:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:01.632 09:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:01.632 09:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:01.632 09:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:01.632 09:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:01.632 09:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:01.632 09:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:01.632 09:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:01.632 09:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:01.632 09:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:01.632 09:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.632 09:56:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.632 09:56:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.632 09:56:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.632 09:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:01.632 "name": "Existed_Raid", 
00:12:01.632 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.632 "strip_size_kb": 0, 00:12:01.632 "state": "configuring", 00:12:01.632 "raid_level": "raid1", 00:12:01.632 "superblock": false, 00:12:01.632 "num_base_bdevs": 4, 00:12:01.632 "num_base_bdevs_discovered": 1, 00:12:01.632 "num_base_bdevs_operational": 4, 00:12:01.632 "base_bdevs_list": [ 00:12:01.632 { 00:12:01.632 "name": "BaseBdev1", 00:12:01.632 "uuid": "bf267c76-a43f-4956-ba11-ad9cf765eb4b", 00:12:01.632 "is_configured": true, 00:12:01.632 "data_offset": 0, 00:12:01.632 "data_size": 65536 00:12:01.632 }, 00:12:01.632 { 00:12:01.632 "name": "BaseBdev2", 00:12:01.632 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.632 "is_configured": false, 00:12:01.632 "data_offset": 0, 00:12:01.632 "data_size": 0 00:12:01.632 }, 00:12:01.632 { 00:12:01.632 "name": "BaseBdev3", 00:12:01.632 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.632 "is_configured": false, 00:12:01.632 "data_offset": 0, 00:12:01.632 "data_size": 0 00:12:01.632 }, 00:12:01.632 { 00:12:01.632 "name": "BaseBdev4", 00:12:01.632 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.632 "is_configured": false, 00:12:01.632 "data_offset": 0, 00:12:01.632 "data_size": 0 00:12:01.632 } 00:12:01.632 ] 00:12:01.632 }' 00:12:01.632 09:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:01.632 09:56:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.199 09:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:02.199 09:56:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.199 09:56:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.199 [2024-10-21 09:56:38.550784] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:02.199 [2024-10-21 09:56:38.550970] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000005f00 name Existed_Raid, state configuring 00:12:02.199 09:56:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.199 09:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:02.199 09:56:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.199 09:56:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.199 [2024-10-21 09:56:38.562960] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:02.199 [2024-10-21 09:56:38.565530] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:02.200 [2024-10-21 09:56:38.565658] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:02.200 [2024-10-21 09:56:38.565701] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:02.200 [2024-10-21 09:56:38.565720] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:02.200 [2024-10-21 09:56:38.565730] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:02.200 [2024-10-21 09:56:38.565742] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:02.200 09:56:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.200 09:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:02.200 09:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:02.200 09:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:02.200 
09:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:02.200 09:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:02.200 09:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:02.200 09:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:02.200 09:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:02.200 09:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:02.200 09:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:02.200 09:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:02.200 09:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:02.200 09:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.200 09:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:02.200 09:56:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.200 09:56:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.200 09:56:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.200 09:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:02.200 "name": "Existed_Raid", 00:12:02.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.200 "strip_size_kb": 0, 00:12:02.200 "state": "configuring", 00:12:02.200 "raid_level": "raid1", 00:12:02.200 "superblock": false, 00:12:02.200 "num_base_bdevs": 4, 00:12:02.200 "num_base_bdevs_discovered": 1, 
00:12:02.200 "num_base_bdevs_operational": 4, 00:12:02.200 "base_bdevs_list": [ 00:12:02.200 { 00:12:02.200 "name": "BaseBdev1", 00:12:02.200 "uuid": "bf267c76-a43f-4956-ba11-ad9cf765eb4b", 00:12:02.200 "is_configured": true, 00:12:02.200 "data_offset": 0, 00:12:02.200 "data_size": 65536 00:12:02.200 }, 00:12:02.200 { 00:12:02.200 "name": "BaseBdev2", 00:12:02.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.200 "is_configured": false, 00:12:02.200 "data_offset": 0, 00:12:02.200 "data_size": 0 00:12:02.200 }, 00:12:02.200 { 00:12:02.200 "name": "BaseBdev3", 00:12:02.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.200 "is_configured": false, 00:12:02.200 "data_offset": 0, 00:12:02.200 "data_size": 0 00:12:02.200 }, 00:12:02.200 { 00:12:02.200 "name": "BaseBdev4", 00:12:02.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.200 "is_configured": false, 00:12:02.200 "data_offset": 0, 00:12:02.200 "data_size": 0 00:12:02.200 } 00:12:02.200 ] 00:12:02.200 }' 00:12:02.200 09:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:02.200 09:56:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.459 09:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:02.459 09:56:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.459 09:56:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.459 [2024-10-21 09:56:39.010852] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:02.459 BaseBdev2 00:12:02.459 09:56:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.459 09:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:02.459 09:56:39 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:12:02.459 09:56:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:02.459 09:56:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:02.459 09:56:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:02.459 09:56:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:02.459 09:56:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:02.459 09:56:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.459 09:56:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.459 09:56:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.459 09:56:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:02.459 09:56:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.459 09:56:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.459 [ 00:12:02.459 { 00:12:02.459 "name": "BaseBdev2", 00:12:02.459 "aliases": [ 00:12:02.459 "dd8bf7c1-694a-4cbe-9422-698be14c2dba" 00:12:02.459 ], 00:12:02.459 "product_name": "Malloc disk", 00:12:02.459 "block_size": 512, 00:12:02.459 "num_blocks": 65536, 00:12:02.459 "uuid": "dd8bf7c1-694a-4cbe-9422-698be14c2dba", 00:12:02.459 "assigned_rate_limits": { 00:12:02.459 "rw_ios_per_sec": 0, 00:12:02.459 "rw_mbytes_per_sec": 0, 00:12:02.459 "r_mbytes_per_sec": 0, 00:12:02.459 "w_mbytes_per_sec": 0 00:12:02.459 }, 00:12:02.459 "claimed": true, 00:12:02.459 "claim_type": "exclusive_write", 00:12:02.459 "zoned": false, 00:12:02.459 "supported_io_types": { 00:12:02.459 "read": true, 
00:12:02.459 "write": true, 00:12:02.459 "unmap": true, 00:12:02.459 "flush": true, 00:12:02.459 "reset": true, 00:12:02.459 "nvme_admin": false, 00:12:02.459 "nvme_io": false, 00:12:02.459 "nvme_io_md": false, 00:12:02.459 "write_zeroes": true, 00:12:02.459 "zcopy": true, 00:12:02.459 "get_zone_info": false, 00:12:02.459 "zone_management": false, 00:12:02.459 "zone_append": false, 00:12:02.459 "compare": false, 00:12:02.459 "compare_and_write": false, 00:12:02.459 "abort": true, 00:12:02.459 "seek_hole": false, 00:12:02.459 "seek_data": false, 00:12:02.459 "copy": true, 00:12:02.459 "nvme_iov_md": false 00:12:02.459 }, 00:12:02.459 "memory_domains": [ 00:12:02.459 { 00:12:02.459 "dma_device_id": "system", 00:12:02.459 "dma_device_type": 1 00:12:02.459 }, 00:12:02.459 { 00:12:02.459 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:02.459 "dma_device_type": 2 00:12:02.459 } 00:12:02.459 ], 00:12:02.459 "driver_specific": {} 00:12:02.459 } 00:12:02.459 ] 00:12:02.459 09:56:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.459 09:56:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:02.459 09:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:02.459 09:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:02.459 09:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:02.459 09:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:02.459 09:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:02.459 09:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:02.459 09:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:12:02.459 09:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:02.459 09:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:02.459 09:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:02.459 09:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:02.459 09:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:02.718 09:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.718 09:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:02.718 09:56:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.718 09:56:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.718 09:56:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.718 09:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:02.718 "name": "Existed_Raid", 00:12:02.718 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.718 "strip_size_kb": 0, 00:12:02.718 "state": "configuring", 00:12:02.718 "raid_level": "raid1", 00:12:02.718 "superblock": false, 00:12:02.718 "num_base_bdevs": 4, 00:12:02.718 "num_base_bdevs_discovered": 2, 00:12:02.718 "num_base_bdevs_operational": 4, 00:12:02.718 "base_bdevs_list": [ 00:12:02.718 { 00:12:02.718 "name": "BaseBdev1", 00:12:02.718 "uuid": "bf267c76-a43f-4956-ba11-ad9cf765eb4b", 00:12:02.718 "is_configured": true, 00:12:02.718 "data_offset": 0, 00:12:02.718 "data_size": 65536 00:12:02.718 }, 00:12:02.718 { 00:12:02.718 "name": "BaseBdev2", 00:12:02.718 "uuid": "dd8bf7c1-694a-4cbe-9422-698be14c2dba", 00:12:02.718 "is_configured": true, 
00:12:02.718 "data_offset": 0, 00:12:02.718 "data_size": 65536 00:12:02.718 }, 00:12:02.718 { 00:12:02.718 "name": "BaseBdev3", 00:12:02.718 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.718 "is_configured": false, 00:12:02.718 "data_offset": 0, 00:12:02.718 "data_size": 0 00:12:02.718 }, 00:12:02.719 { 00:12:02.719 "name": "BaseBdev4", 00:12:02.719 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.719 "is_configured": false, 00:12:02.719 "data_offset": 0, 00:12:02.719 "data_size": 0 00:12:02.719 } 00:12:02.719 ] 00:12:02.719 }' 00:12:02.719 09:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:02.719 09:56:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.977 09:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:02.977 09:56:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.977 09:56:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.237 [2024-10-21 09:56:39.586263] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:03.237 BaseBdev3 00:12:03.237 09:56:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.237 09:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:03.237 09:56:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:12:03.237 09:56:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:03.237 09:56:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:03.237 09:56:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:03.237 09:56:39 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:03.237 09:56:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:03.237 09:56:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.237 09:56:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.237 09:56:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.237 09:56:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:03.237 09:56:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.237 09:56:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.237 [ 00:12:03.237 { 00:12:03.237 "name": "BaseBdev3", 00:12:03.237 "aliases": [ 00:12:03.237 "a5432b53-9c71-443c-a8d8-b8ee9082e9a1" 00:12:03.237 ], 00:12:03.237 "product_name": "Malloc disk", 00:12:03.237 "block_size": 512, 00:12:03.237 "num_blocks": 65536, 00:12:03.237 "uuid": "a5432b53-9c71-443c-a8d8-b8ee9082e9a1", 00:12:03.237 "assigned_rate_limits": { 00:12:03.237 "rw_ios_per_sec": 0, 00:12:03.237 "rw_mbytes_per_sec": 0, 00:12:03.237 "r_mbytes_per_sec": 0, 00:12:03.237 "w_mbytes_per_sec": 0 00:12:03.237 }, 00:12:03.237 "claimed": true, 00:12:03.237 "claim_type": "exclusive_write", 00:12:03.237 "zoned": false, 00:12:03.237 "supported_io_types": { 00:12:03.237 "read": true, 00:12:03.237 "write": true, 00:12:03.237 "unmap": true, 00:12:03.237 "flush": true, 00:12:03.237 "reset": true, 00:12:03.237 "nvme_admin": false, 00:12:03.237 "nvme_io": false, 00:12:03.237 "nvme_io_md": false, 00:12:03.237 "write_zeroes": true, 00:12:03.237 "zcopy": true, 00:12:03.237 "get_zone_info": false, 00:12:03.237 "zone_management": false, 00:12:03.237 "zone_append": false, 00:12:03.237 "compare": false, 00:12:03.237 "compare_and_write": false, 
00:12:03.237 "abort": true, 00:12:03.237 "seek_hole": false, 00:12:03.237 "seek_data": false, 00:12:03.237 "copy": true, 00:12:03.237 "nvme_iov_md": false 00:12:03.237 }, 00:12:03.237 "memory_domains": [ 00:12:03.237 { 00:12:03.237 "dma_device_id": "system", 00:12:03.237 "dma_device_type": 1 00:12:03.237 }, 00:12:03.237 { 00:12:03.237 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:03.237 "dma_device_type": 2 00:12:03.237 } 00:12:03.237 ], 00:12:03.237 "driver_specific": {} 00:12:03.237 } 00:12:03.237 ] 00:12:03.237 09:56:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.237 09:56:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:03.237 09:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:03.237 09:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:03.237 09:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:03.237 09:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:03.237 09:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:03.237 09:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:03.237 09:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:03.237 09:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:03.237 09:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:03.237 09:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:03.237 09:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:12:03.237 09:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:03.237 09:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.237 09:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:03.237 09:56:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.237 09:56:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.237 09:56:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.237 09:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:03.237 "name": "Existed_Raid", 00:12:03.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.237 "strip_size_kb": 0, 00:12:03.237 "state": "configuring", 00:12:03.237 "raid_level": "raid1", 00:12:03.237 "superblock": false, 00:12:03.237 "num_base_bdevs": 4, 00:12:03.237 "num_base_bdevs_discovered": 3, 00:12:03.237 "num_base_bdevs_operational": 4, 00:12:03.237 "base_bdevs_list": [ 00:12:03.237 { 00:12:03.237 "name": "BaseBdev1", 00:12:03.237 "uuid": "bf267c76-a43f-4956-ba11-ad9cf765eb4b", 00:12:03.237 "is_configured": true, 00:12:03.237 "data_offset": 0, 00:12:03.237 "data_size": 65536 00:12:03.237 }, 00:12:03.237 { 00:12:03.237 "name": "BaseBdev2", 00:12:03.237 "uuid": "dd8bf7c1-694a-4cbe-9422-698be14c2dba", 00:12:03.237 "is_configured": true, 00:12:03.237 "data_offset": 0, 00:12:03.237 "data_size": 65536 00:12:03.237 }, 00:12:03.237 { 00:12:03.237 "name": "BaseBdev3", 00:12:03.237 "uuid": "a5432b53-9c71-443c-a8d8-b8ee9082e9a1", 00:12:03.237 "is_configured": true, 00:12:03.237 "data_offset": 0, 00:12:03.237 "data_size": 65536 00:12:03.237 }, 00:12:03.237 { 00:12:03.237 "name": "BaseBdev4", 00:12:03.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.237 "is_configured": false, 
00:12:03.237 "data_offset": 0, 00:12:03.237 "data_size": 0 00:12:03.237 } 00:12:03.237 ] 00:12:03.237 }' 00:12:03.237 09:56:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:03.237 09:56:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.806 09:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:03.806 09:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.806 09:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.806 [2024-10-21 09:56:40.142835] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:03.806 [2024-10-21 09:56:40.143055] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:12:03.806 [2024-10-21 09:56:40.143086] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:03.806 [2024-10-21 09:56:40.143460] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:12:03.806 [2024-10-21 09:56:40.143732] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:12:03.806 [2024-10-21 09:56:40.143791] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006280 00:12:03.806 [2024-10-21 09:56:40.144159] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:03.806 BaseBdev4 00:12:03.806 09:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.806 09:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:03.806 09:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:12:03.806 09:56:40 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:03.806 09:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:03.806 09:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:03.806 09:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:03.807 09:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:03.807 09:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.807 09:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.807 09:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.807 09:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:03.807 09:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.807 09:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.807 [ 00:12:03.807 { 00:12:03.807 "name": "BaseBdev4", 00:12:03.807 "aliases": [ 00:12:03.807 "11d32a62-da4a-4dae-b2c9-f61c5a8e2228" 00:12:03.807 ], 00:12:03.807 "product_name": "Malloc disk", 00:12:03.807 "block_size": 512, 00:12:03.807 "num_blocks": 65536, 00:12:03.807 "uuid": "11d32a62-da4a-4dae-b2c9-f61c5a8e2228", 00:12:03.807 "assigned_rate_limits": { 00:12:03.807 "rw_ios_per_sec": 0, 00:12:03.807 "rw_mbytes_per_sec": 0, 00:12:03.807 "r_mbytes_per_sec": 0, 00:12:03.807 "w_mbytes_per_sec": 0 00:12:03.807 }, 00:12:03.807 "claimed": true, 00:12:03.807 "claim_type": "exclusive_write", 00:12:03.807 "zoned": false, 00:12:03.807 "supported_io_types": { 00:12:03.807 "read": true, 00:12:03.807 "write": true, 00:12:03.807 "unmap": true, 00:12:03.807 "flush": true, 00:12:03.807 "reset": true, 00:12:03.807 
"nvme_admin": false, 00:12:03.807 "nvme_io": false, 00:12:03.807 "nvme_io_md": false, 00:12:03.807 "write_zeroes": true, 00:12:03.807 "zcopy": true, 00:12:03.807 "get_zone_info": false, 00:12:03.807 "zone_management": false, 00:12:03.807 "zone_append": false, 00:12:03.807 "compare": false, 00:12:03.807 "compare_and_write": false, 00:12:03.807 "abort": true, 00:12:03.807 "seek_hole": false, 00:12:03.807 "seek_data": false, 00:12:03.807 "copy": true, 00:12:03.807 "nvme_iov_md": false 00:12:03.807 }, 00:12:03.807 "memory_domains": [ 00:12:03.807 { 00:12:03.807 "dma_device_id": "system", 00:12:03.807 "dma_device_type": 1 00:12:03.807 }, 00:12:03.807 { 00:12:03.807 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:03.807 "dma_device_type": 2 00:12:03.807 } 00:12:03.807 ], 00:12:03.807 "driver_specific": {} 00:12:03.807 } 00:12:03.807 ] 00:12:03.807 09:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.807 09:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:03.807 09:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:03.807 09:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:03.807 09:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:12:03.807 09:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:03.807 09:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:03.807 09:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:03.807 09:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:03.807 09:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:03.807 09:56:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:03.807 09:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:03.807 09:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:03.807 09:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:03.807 09:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.807 09:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.807 09:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:03.807 09:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.807 09:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.807 09:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:03.807 "name": "Existed_Raid", 00:12:03.807 "uuid": "803cba57-61e9-449f-9f46-bb4adf751d7d", 00:12:03.807 "strip_size_kb": 0, 00:12:03.807 "state": "online", 00:12:03.807 "raid_level": "raid1", 00:12:03.807 "superblock": false, 00:12:03.807 "num_base_bdevs": 4, 00:12:03.807 "num_base_bdevs_discovered": 4, 00:12:03.807 "num_base_bdevs_operational": 4, 00:12:03.807 "base_bdevs_list": [ 00:12:03.807 { 00:12:03.807 "name": "BaseBdev1", 00:12:03.807 "uuid": "bf267c76-a43f-4956-ba11-ad9cf765eb4b", 00:12:03.807 "is_configured": true, 00:12:03.807 "data_offset": 0, 00:12:03.807 "data_size": 65536 00:12:03.807 }, 00:12:03.807 { 00:12:03.807 "name": "BaseBdev2", 00:12:03.807 "uuid": "dd8bf7c1-694a-4cbe-9422-698be14c2dba", 00:12:03.807 "is_configured": true, 00:12:03.807 "data_offset": 0, 00:12:03.807 "data_size": 65536 00:12:03.807 }, 00:12:03.807 { 00:12:03.807 "name": "BaseBdev3", 00:12:03.807 "uuid": 
"a5432b53-9c71-443c-a8d8-b8ee9082e9a1", 00:12:03.807 "is_configured": true, 00:12:03.807 "data_offset": 0, 00:12:03.807 "data_size": 65536 00:12:03.807 }, 00:12:03.807 { 00:12:03.807 "name": "BaseBdev4", 00:12:03.807 "uuid": "11d32a62-da4a-4dae-b2c9-f61c5a8e2228", 00:12:03.807 "is_configured": true, 00:12:03.807 "data_offset": 0, 00:12:03.807 "data_size": 65536 00:12:03.807 } 00:12:03.807 ] 00:12:03.807 }' 00:12:03.807 09:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:03.807 09:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.375 09:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:04.375 09:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:04.375 09:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:04.375 09:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:04.375 09:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:04.375 09:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:04.375 09:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:04.375 09:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.375 09:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.375 09:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:04.375 [2024-10-21 09:56:40.699107] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:04.375 09:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.375 09:56:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:04.375 "name": "Existed_Raid", 00:12:04.375 "aliases": [ 00:12:04.375 "803cba57-61e9-449f-9f46-bb4adf751d7d" 00:12:04.375 ], 00:12:04.375 "product_name": "Raid Volume", 00:12:04.375 "block_size": 512, 00:12:04.375 "num_blocks": 65536, 00:12:04.375 "uuid": "803cba57-61e9-449f-9f46-bb4adf751d7d", 00:12:04.375 "assigned_rate_limits": { 00:12:04.375 "rw_ios_per_sec": 0, 00:12:04.375 "rw_mbytes_per_sec": 0, 00:12:04.375 "r_mbytes_per_sec": 0, 00:12:04.375 "w_mbytes_per_sec": 0 00:12:04.375 }, 00:12:04.375 "claimed": false, 00:12:04.375 "zoned": false, 00:12:04.375 "supported_io_types": { 00:12:04.375 "read": true, 00:12:04.375 "write": true, 00:12:04.375 "unmap": false, 00:12:04.375 "flush": false, 00:12:04.375 "reset": true, 00:12:04.375 "nvme_admin": false, 00:12:04.375 "nvme_io": false, 00:12:04.375 "nvme_io_md": false, 00:12:04.375 "write_zeroes": true, 00:12:04.375 "zcopy": false, 00:12:04.375 "get_zone_info": false, 00:12:04.375 "zone_management": false, 00:12:04.375 "zone_append": false, 00:12:04.375 "compare": false, 00:12:04.375 "compare_and_write": false, 00:12:04.375 "abort": false, 00:12:04.375 "seek_hole": false, 00:12:04.375 "seek_data": false, 00:12:04.375 "copy": false, 00:12:04.375 "nvme_iov_md": false 00:12:04.375 }, 00:12:04.375 "memory_domains": [ 00:12:04.375 { 00:12:04.375 "dma_device_id": "system", 00:12:04.375 "dma_device_type": 1 00:12:04.375 }, 00:12:04.375 { 00:12:04.375 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:04.375 "dma_device_type": 2 00:12:04.375 }, 00:12:04.375 { 00:12:04.375 "dma_device_id": "system", 00:12:04.375 "dma_device_type": 1 00:12:04.375 }, 00:12:04.375 { 00:12:04.375 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:04.375 "dma_device_type": 2 00:12:04.375 }, 00:12:04.375 { 00:12:04.375 "dma_device_id": "system", 00:12:04.375 "dma_device_type": 1 00:12:04.375 }, 00:12:04.375 { 00:12:04.375 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:12:04.375 "dma_device_type": 2 00:12:04.375 }, 00:12:04.375 { 00:12:04.375 "dma_device_id": "system", 00:12:04.375 "dma_device_type": 1 00:12:04.375 }, 00:12:04.375 { 00:12:04.375 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:04.375 "dma_device_type": 2 00:12:04.375 } 00:12:04.375 ], 00:12:04.375 "driver_specific": { 00:12:04.375 "raid": { 00:12:04.375 "uuid": "803cba57-61e9-449f-9f46-bb4adf751d7d", 00:12:04.375 "strip_size_kb": 0, 00:12:04.375 "state": "online", 00:12:04.375 "raid_level": "raid1", 00:12:04.375 "superblock": false, 00:12:04.375 "num_base_bdevs": 4, 00:12:04.375 "num_base_bdevs_discovered": 4, 00:12:04.375 "num_base_bdevs_operational": 4, 00:12:04.375 "base_bdevs_list": [ 00:12:04.375 { 00:12:04.375 "name": "BaseBdev1", 00:12:04.375 "uuid": "bf267c76-a43f-4956-ba11-ad9cf765eb4b", 00:12:04.375 "is_configured": true, 00:12:04.375 "data_offset": 0, 00:12:04.375 "data_size": 65536 00:12:04.375 }, 00:12:04.375 { 00:12:04.375 "name": "BaseBdev2", 00:12:04.375 "uuid": "dd8bf7c1-694a-4cbe-9422-698be14c2dba", 00:12:04.375 "is_configured": true, 00:12:04.375 "data_offset": 0, 00:12:04.375 "data_size": 65536 00:12:04.375 }, 00:12:04.375 { 00:12:04.375 "name": "BaseBdev3", 00:12:04.375 "uuid": "a5432b53-9c71-443c-a8d8-b8ee9082e9a1", 00:12:04.375 "is_configured": true, 00:12:04.375 "data_offset": 0, 00:12:04.375 "data_size": 65536 00:12:04.375 }, 00:12:04.375 { 00:12:04.375 "name": "BaseBdev4", 00:12:04.375 "uuid": "11d32a62-da4a-4dae-b2c9-f61c5a8e2228", 00:12:04.375 "is_configured": true, 00:12:04.375 "data_offset": 0, 00:12:04.375 "data_size": 65536 00:12:04.375 } 00:12:04.375 ] 00:12:04.375 } 00:12:04.375 } 00:12:04.375 }' 00:12:04.375 09:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:04.375 09:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:04.375 BaseBdev2 00:12:04.375 BaseBdev3 
00:12:04.375 BaseBdev4' 00:12:04.375 09:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:04.375 09:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:04.375 09:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:04.375 09:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:04.375 09:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.375 09:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.376 09:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:04.376 09:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.376 09:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:04.376 09:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:04.376 09:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:04.376 09:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:04.376 09:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.376 09:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.376 09:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:04.376 09:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.376 09:56:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:04.376 09:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:04.376 09:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:04.376 09:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:04.376 09:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.376 09:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.376 09:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:04.376 09:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.635 09:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:04.635 09:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:04.635 09:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:04.635 09:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:04.635 09:56:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:04.635 09:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.635 09:56:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.635 09:56:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.635 09:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:04.635 09:56:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:04.635 09:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:04.635 09:56:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.635 09:56:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.635 [2024-10-21 09:56:41.030849] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:04.635 09:56:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.635 09:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:04.635 09:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:12:04.635 09:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:04.635 09:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:04.635 09:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:12:04.635 09:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:12:04.635 09:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:04.635 09:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:04.635 09:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:04.636 09:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:04.636 09:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:04.636 09:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:04.636 
09:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:04.636 09:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:04.636 09:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:04.636 09:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.636 09:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:04.636 09:56:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.636 09:56:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.636 09:56:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.636 09:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:04.636 "name": "Existed_Raid", 00:12:04.636 "uuid": "803cba57-61e9-449f-9f46-bb4adf751d7d", 00:12:04.636 "strip_size_kb": 0, 00:12:04.636 "state": "online", 00:12:04.636 "raid_level": "raid1", 00:12:04.636 "superblock": false, 00:12:04.636 "num_base_bdevs": 4, 00:12:04.636 "num_base_bdevs_discovered": 3, 00:12:04.636 "num_base_bdevs_operational": 3, 00:12:04.636 "base_bdevs_list": [ 00:12:04.636 { 00:12:04.636 "name": null, 00:12:04.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.636 "is_configured": false, 00:12:04.636 "data_offset": 0, 00:12:04.636 "data_size": 65536 00:12:04.636 }, 00:12:04.636 { 00:12:04.636 "name": "BaseBdev2", 00:12:04.636 "uuid": "dd8bf7c1-694a-4cbe-9422-698be14c2dba", 00:12:04.636 "is_configured": true, 00:12:04.636 "data_offset": 0, 00:12:04.636 "data_size": 65536 00:12:04.636 }, 00:12:04.636 { 00:12:04.636 "name": "BaseBdev3", 00:12:04.636 "uuid": "a5432b53-9c71-443c-a8d8-b8ee9082e9a1", 00:12:04.636 "is_configured": true, 00:12:04.636 "data_offset": 0, 
00:12:04.636 "data_size": 65536 00:12:04.636 }, 00:12:04.636 { 00:12:04.636 "name": "BaseBdev4", 00:12:04.636 "uuid": "11d32a62-da4a-4dae-b2c9-f61c5a8e2228", 00:12:04.636 "is_configured": true, 00:12:04.636 "data_offset": 0, 00:12:04.636 "data_size": 65536 00:12:04.636 } 00:12:04.636 ] 00:12:04.636 }' 00:12:04.636 09:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:04.636 09:56:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.204 09:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:05.204 09:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:05.204 09:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.204 09:56:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.204 09:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:05.204 09:56:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.204 09:56:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.204 09:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:05.204 09:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:05.204 09:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:05.204 09:56:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.204 09:56:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.204 [2024-10-21 09:56:41.634817] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:05.204 09:56:41 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.204 09:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:05.205 09:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:05.205 09:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.205 09:56:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.205 09:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:05.205 09:56:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.205 09:56:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.205 09:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:05.205 09:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:05.205 09:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:05.205 09:56:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.205 09:56:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.464 [2024-10-21 09:56:41.801662] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:05.464 09:56:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.464 09:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:05.464 09:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:05.464 09:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.464 09:56:41 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.464 09:56:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.464 09:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:05.464 09:56:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.464 09:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:05.464 09:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:05.464 09:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:05.464 09:56:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.464 09:56:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.464 [2024-10-21 09:56:41.968755] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:05.464 [2024-10-21 09:56:41.968961] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:05.723 [2024-10-21 09:56:42.073248] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:05.723 [2024-10-21 09:56:42.073463] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:05.723 [2024-10-21 09:56:42.073520] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state offline 00:12:05.723 09:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.723 09:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:05.723 09:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:05.723 09:56:42 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.723 09:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.723 09:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:05.723 09:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.723 09:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.723 09:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:05.723 09:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:05.723 09:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:05.723 09:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:05.723 09:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:05.723 09:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:05.723 09:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.723 09:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.723 BaseBdev2 00:12:05.723 09:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.723 09:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:05.723 09:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:12:05.723 09:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:05.723 09:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:05.723 09:56:42 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:05.723 09:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:05.723 09:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:05.723 09:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.723 09:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.723 09:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.723 09:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:05.723 09:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.723 09:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.723 [ 00:12:05.723 { 00:12:05.723 "name": "BaseBdev2", 00:12:05.723 "aliases": [ 00:12:05.723 "3184eb36-5374-47b5-bd30-5f1c63b9781d" 00:12:05.723 ], 00:12:05.723 "product_name": "Malloc disk", 00:12:05.723 "block_size": 512, 00:12:05.723 "num_blocks": 65536, 00:12:05.723 "uuid": "3184eb36-5374-47b5-bd30-5f1c63b9781d", 00:12:05.723 "assigned_rate_limits": { 00:12:05.723 "rw_ios_per_sec": 0, 00:12:05.723 "rw_mbytes_per_sec": 0, 00:12:05.723 "r_mbytes_per_sec": 0, 00:12:05.723 "w_mbytes_per_sec": 0 00:12:05.723 }, 00:12:05.723 "claimed": false, 00:12:05.723 "zoned": false, 00:12:05.723 "supported_io_types": { 00:12:05.723 "read": true, 00:12:05.723 "write": true, 00:12:05.723 "unmap": true, 00:12:05.723 "flush": true, 00:12:05.723 "reset": true, 00:12:05.723 "nvme_admin": false, 00:12:05.723 "nvme_io": false, 00:12:05.723 "nvme_io_md": false, 00:12:05.723 "write_zeroes": true, 00:12:05.723 "zcopy": true, 00:12:05.723 "get_zone_info": false, 00:12:05.723 "zone_management": false, 00:12:05.723 "zone_append": false, 
00:12:05.723 "compare": false, 00:12:05.723 "compare_and_write": false, 00:12:05.723 "abort": true, 00:12:05.723 "seek_hole": false, 00:12:05.723 "seek_data": false, 00:12:05.723 "copy": true, 00:12:05.723 "nvme_iov_md": false 00:12:05.723 }, 00:12:05.723 "memory_domains": [ 00:12:05.723 { 00:12:05.723 "dma_device_id": "system", 00:12:05.723 "dma_device_type": 1 00:12:05.723 }, 00:12:05.723 { 00:12:05.723 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.723 "dma_device_type": 2 00:12:05.723 } 00:12:05.723 ], 00:12:05.723 "driver_specific": {} 00:12:05.723 } 00:12:05.723 ] 00:12:05.723 09:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.723 09:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:05.723 09:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:05.723 09:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:05.723 09:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:05.723 09:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.723 09:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.723 BaseBdev3 00:12:05.723 09:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.723 09:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:05.723 09:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:12:05.723 09:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:05.723 09:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:05.723 09:56:42 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:05.723 09:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:05.723 09:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:05.723 09:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.723 09:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.723 09:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.723 09:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:05.723 09:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.723 09:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.723 [ 00:12:05.723 { 00:12:05.723 "name": "BaseBdev3", 00:12:05.723 "aliases": [ 00:12:05.723 "910bc96c-9f80-4252-80cf-38102223111f" 00:12:05.723 ], 00:12:05.723 "product_name": "Malloc disk", 00:12:05.723 "block_size": 512, 00:12:05.723 "num_blocks": 65536, 00:12:05.723 "uuid": "910bc96c-9f80-4252-80cf-38102223111f", 00:12:05.723 "assigned_rate_limits": { 00:12:05.723 "rw_ios_per_sec": 0, 00:12:05.723 "rw_mbytes_per_sec": 0, 00:12:05.723 "r_mbytes_per_sec": 0, 00:12:05.723 "w_mbytes_per_sec": 0 00:12:05.723 }, 00:12:05.723 "claimed": false, 00:12:05.723 "zoned": false, 00:12:05.724 "supported_io_types": { 00:12:05.724 "read": true, 00:12:05.724 "write": true, 00:12:05.724 "unmap": true, 00:12:05.724 "flush": true, 00:12:05.724 "reset": true, 00:12:05.724 "nvme_admin": false, 00:12:05.724 "nvme_io": false, 00:12:05.724 "nvme_io_md": false, 00:12:05.724 "write_zeroes": true, 00:12:05.724 "zcopy": true, 00:12:05.724 "get_zone_info": false, 00:12:05.724 "zone_management": false, 00:12:05.724 "zone_append": false, 
00:12:05.724 "compare": false, 00:12:05.724 "compare_and_write": false, 00:12:05.724 "abort": true, 00:12:05.724 "seek_hole": false, 00:12:05.724 "seek_data": false, 00:12:05.724 "copy": true, 00:12:05.724 "nvme_iov_md": false 00:12:05.724 }, 00:12:05.724 "memory_domains": [ 00:12:05.724 { 00:12:05.724 "dma_device_id": "system", 00:12:05.724 "dma_device_type": 1 00:12:05.724 }, 00:12:05.724 { 00:12:05.724 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.724 "dma_device_type": 2 00:12:05.724 } 00:12:05.724 ], 00:12:05.724 "driver_specific": {} 00:12:05.724 } 00:12:05.724 ] 00:12:05.724 09:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.724 09:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:05.724 09:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:05.724 09:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:05.724 09:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:05.724 09:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.724 09:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.983 BaseBdev4 00:12:05.983 09:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.983 09:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:05.983 09:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:12:05.983 09:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:05.983 09:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:05.983 09:56:42 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:05.983 09:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:05.983 09:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:05.983 09:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.983 09:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.983 09:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.983 09:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:05.983 09:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.983 09:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.983 [ 00:12:05.983 { 00:12:05.983 "name": "BaseBdev4", 00:12:05.983 "aliases": [ 00:12:05.983 "2ad598b1-7b21-4809-88aa-a0bc31889144" 00:12:05.983 ], 00:12:05.983 "product_name": "Malloc disk", 00:12:05.983 "block_size": 512, 00:12:05.983 "num_blocks": 65536, 00:12:05.983 "uuid": "2ad598b1-7b21-4809-88aa-a0bc31889144", 00:12:05.983 "assigned_rate_limits": { 00:12:05.983 "rw_ios_per_sec": 0, 00:12:05.983 "rw_mbytes_per_sec": 0, 00:12:05.983 "r_mbytes_per_sec": 0, 00:12:05.983 "w_mbytes_per_sec": 0 00:12:05.983 }, 00:12:05.983 "claimed": false, 00:12:05.983 "zoned": false, 00:12:05.983 "supported_io_types": { 00:12:05.983 "read": true, 00:12:05.983 "write": true, 00:12:05.983 "unmap": true, 00:12:05.983 "flush": true, 00:12:05.983 "reset": true, 00:12:05.983 "nvme_admin": false, 00:12:05.983 "nvme_io": false, 00:12:05.983 "nvme_io_md": false, 00:12:05.983 "write_zeroes": true, 00:12:05.983 "zcopy": true, 00:12:05.983 "get_zone_info": false, 00:12:05.983 "zone_management": false, 00:12:05.983 "zone_append": false, 
00:12:05.983 "compare": false, 00:12:05.983 "compare_and_write": false, 00:12:05.983 "abort": true, 00:12:05.983 "seek_hole": false, 00:12:05.983 "seek_data": false, 00:12:05.983 "copy": true, 00:12:05.983 "nvme_iov_md": false 00:12:05.983 }, 00:12:05.983 "memory_domains": [ 00:12:05.983 { 00:12:05.983 "dma_device_id": "system", 00:12:05.983 "dma_device_type": 1 00:12:05.983 }, 00:12:05.983 { 00:12:05.983 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.983 "dma_device_type": 2 00:12:05.983 } 00:12:05.983 ], 00:12:05.983 "driver_specific": {} 00:12:05.983 } 00:12:05.983 ] 00:12:05.983 09:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.983 09:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:05.983 09:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:05.983 09:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:05.983 09:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:05.983 09:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.983 09:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.983 [2024-10-21 09:56:42.399367] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:05.983 [2024-10-21 09:56:42.399525] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:05.983 [2024-10-21 09:56:42.399611] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:05.983 [2024-10-21 09:56:42.402196] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:05.983 [2024-10-21 09:56:42.402326] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:05.983 09:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.983 09:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:05.983 09:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:05.984 09:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:05.984 09:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:05.984 09:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:05.984 09:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:05.984 09:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:05.984 09:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:05.984 09:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:05.984 09:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:05.984 09:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.984 09:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:05.984 09:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.984 09:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.984 09:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.984 09:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:12:05.984 "name": "Existed_Raid", 00:12:05.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.984 "strip_size_kb": 0, 00:12:05.984 "state": "configuring", 00:12:05.984 "raid_level": "raid1", 00:12:05.984 "superblock": false, 00:12:05.984 "num_base_bdevs": 4, 00:12:05.984 "num_base_bdevs_discovered": 3, 00:12:05.984 "num_base_bdevs_operational": 4, 00:12:05.984 "base_bdevs_list": [ 00:12:05.984 { 00:12:05.984 "name": "BaseBdev1", 00:12:05.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.984 "is_configured": false, 00:12:05.984 "data_offset": 0, 00:12:05.984 "data_size": 0 00:12:05.984 }, 00:12:05.984 { 00:12:05.984 "name": "BaseBdev2", 00:12:05.984 "uuid": "3184eb36-5374-47b5-bd30-5f1c63b9781d", 00:12:05.984 "is_configured": true, 00:12:05.984 "data_offset": 0, 00:12:05.984 "data_size": 65536 00:12:05.984 }, 00:12:05.984 { 00:12:05.984 "name": "BaseBdev3", 00:12:05.984 "uuid": "910bc96c-9f80-4252-80cf-38102223111f", 00:12:05.984 "is_configured": true, 00:12:05.984 "data_offset": 0, 00:12:05.984 "data_size": 65536 00:12:05.984 }, 00:12:05.984 { 00:12:05.984 "name": "BaseBdev4", 00:12:05.984 "uuid": "2ad598b1-7b21-4809-88aa-a0bc31889144", 00:12:05.984 "is_configured": true, 00:12:05.984 "data_offset": 0, 00:12:05.984 "data_size": 65536 00:12:05.984 } 00:12:05.984 ] 00:12:05.984 }' 00:12:05.984 09:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:05.984 09:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.551 09:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:06.551 09:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.551 09:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.551 [2024-10-21 09:56:42.886816] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
00:12:06.551 09:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.551 09:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:06.551 09:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:06.551 09:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:06.551 09:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:06.551 09:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:06.551 09:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:06.551 09:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:06.551 09:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:06.551 09:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:06.551 09:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:06.551 09:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.551 09:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:06.551 09:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.551 09:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.551 09:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.551 09:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:06.551 "name": "Existed_Raid", 00:12:06.551 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:06.551 "strip_size_kb": 0, 00:12:06.551 "state": "configuring", 00:12:06.551 "raid_level": "raid1", 00:12:06.551 "superblock": false, 00:12:06.551 "num_base_bdevs": 4, 00:12:06.551 "num_base_bdevs_discovered": 2, 00:12:06.551 "num_base_bdevs_operational": 4, 00:12:06.551 "base_bdevs_list": [ 00:12:06.551 { 00:12:06.551 "name": "BaseBdev1", 00:12:06.551 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:06.551 "is_configured": false, 00:12:06.551 "data_offset": 0, 00:12:06.551 "data_size": 0 00:12:06.551 }, 00:12:06.551 { 00:12:06.551 "name": null, 00:12:06.551 "uuid": "3184eb36-5374-47b5-bd30-5f1c63b9781d", 00:12:06.551 "is_configured": false, 00:12:06.551 "data_offset": 0, 00:12:06.551 "data_size": 65536 00:12:06.551 }, 00:12:06.551 { 00:12:06.551 "name": "BaseBdev3", 00:12:06.551 "uuid": "910bc96c-9f80-4252-80cf-38102223111f", 00:12:06.551 "is_configured": true, 00:12:06.551 "data_offset": 0, 00:12:06.551 "data_size": 65536 00:12:06.551 }, 00:12:06.551 { 00:12:06.551 "name": "BaseBdev4", 00:12:06.551 "uuid": "2ad598b1-7b21-4809-88aa-a0bc31889144", 00:12:06.551 "is_configured": true, 00:12:06.551 "data_offset": 0, 00:12:06.551 "data_size": 65536 00:12:06.551 } 00:12:06.551 ] 00:12:06.551 }' 00:12:06.551 09:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:06.551 09:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.810 09:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.810 09:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:06.810 09:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.810 09:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.810 09:56:43 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.069 09:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:07.069 09:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:07.069 09:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.069 09:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.069 [2024-10-21 09:56:43.464885] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:07.069 BaseBdev1 00:12:07.069 09:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.069 09:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:07.069 09:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:12:07.069 09:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:07.069 09:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:07.069 09:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:07.069 09:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:07.069 09:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:07.069 09:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.069 09:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.069 09:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.069 09:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev1 -t 2000 00:12:07.069 09:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.070 09:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.070 [ 00:12:07.070 { 00:12:07.070 "name": "BaseBdev1", 00:12:07.070 "aliases": [ 00:12:07.070 "7324a84b-52c1-4467-b04f-4fd0e0ee84dc" 00:12:07.070 ], 00:12:07.070 "product_name": "Malloc disk", 00:12:07.070 "block_size": 512, 00:12:07.070 "num_blocks": 65536, 00:12:07.070 "uuid": "7324a84b-52c1-4467-b04f-4fd0e0ee84dc", 00:12:07.070 "assigned_rate_limits": { 00:12:07.070 "rw_ios_per_sec": 0, 00:12:07.070 "rw_mbytes_per_sec": 0, 00:12:07.070 "r_mbytes_per_sec": 0, 00:12:07.070 "w_mbytes_per_sec": 0 00:12:07.070 }, 00:12:07.070 "claimed": true, 00:12:07.070 "claim_type": "exclusive_write", 00:12:07.070 "zoned": false, 00:12:07.070 "supported_io_types": { 00:12:07.070 "read": true, 00:12:07.070 "write": true, 00:12:07.070 "unmap": true, 00:12:07.070 "flush": true, 00:12:07.070 "reset": true, 00:12:07.070 "nvme_admin": false, 00:12:07.070 "nvme_io": false, 00:12:07.070 "nvme_io_md": false, 00:12:07.070 "write_zeroes": true, 00:12:07.070 "zcopy": true, 00:12:07.070 "get_zone_info": false, 00:12:07.070 "zone_management": false, 00:12:07.070 "zone_append": false, 00:12:07.070 "compare": false, 00:12:07.070 "compare_and_write": false, 00:12:07.070 "abort": true, 00:12:07.070 "seek_hole": false, 00:12:07.070 "seek_data": false, 00:12:07.070 "copy": true, 00:12:07.070 "nvme_iov_md": false 00:12:07.070 }, 00:12:07.070 "memory_domains": [ 00:12:07.070 { 00:12:07.070 "dma_device_id": "system", 00:12:07.070 "dma_device_type": 1 00:12:07.070 }, 00:12:07.070 { 00:12:07.070 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:07.070 "dma_device_type": 2 00:12:07.070 } 00:12:07.070 ], 00:12:07.070 "driver_specific": {} 00:12:07.070 } 00:12:07.070 ] 00:12:07.070 09:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:12:07.070 09:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:07.070 09:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:07.070 09:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:07.070 09:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:07.070 09:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:07.070 09:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:07.070 09:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:07.070 09:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:07.070 09:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:07.070 09:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:07.070 09:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:07.070 09:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.070 09:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:07.070 09:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.070 09:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.070 09:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.070 09:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:07.070 "name": "Existed_Raid", 00:12:07.070 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:07.070 "strip_size_kb": 0, 00:12:07.070 "state": "configuring", 00:12:07.070 "raid_level": "raid1", 00:12:07.070 "superblock": false, 00:12:07.070 "num_base_bdevs": 4, 00:12:07.070 "num_base_bdevs_discovered": 3, 00:12:07.070 "num_base_bdevs_operational": 4, 00:12:07.070 "base_bdevs_list": [ 00:12:07.070 { 00:12:07.070 "name": "BaseBdev1", 00:12:07.070 "uuid": "7324a84b-52c1-4467-b04f-4fd0e0ee84dc", 00:12:07.070 "is_configured": true, 00:12:07.070 "data_offset": 0, 00:12:07.070 "data_size": 65536 00:12:07.070 }, 00:12:07.070 { 00:12:07.070 "name": null, 00:12:07.070 "uuid": "3184eb36-5374-47b5-bd30-5f1c63b9781d", 00:12:07.070 "is_configured": false, 00:12:07.070 "data_offset": 0, 00:12:07.070 "data_size": 65536 00:12:07.070 }, 00:12:07.070 { 00:12:07.070 "name": "BaseBdev3", 00:12:07.070 "uuid": "910bc96c-9f80-4252-80cf-38102223111f", 00:12:07.070 "is_configured": true, 00:12:07.070 "data_offset": 0, 00:12:07.070 "data_size": 65536 00:12:07.070 }, 00:12:07.070 { 00:12:07.070 "name": "BaseBdev4", 00:12:07.070 "uuid": "2ad598b1-7b21-4809-88aa-a0bc31889144", 00:12:07.070 "is_configured": true, 00:12:07.070 "data_offset": 0, 00:12:07.070 "data_size": 65536 00:12:07.070 } 00:12:07.070 ] 00:12:07.070 }' 00:12:07.070 09:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:07.070 09:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.638 09:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.638 09:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.638 09:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.638 09:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:07.638 09:56:43 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.638 09:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:07.638 09:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:07.638 09:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.638 09:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.638 [2024-10-21 09:56:44.012135] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:07.638 09:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.639 09:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:07.639 09:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:07.639 09:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:07.639 09:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:07.639 09:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:07.639 09:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:07.639 09:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:07.639 09:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:07.639 09:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:07.639 09:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:07.639 09:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:12:07.639 09:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:07.639 09:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.639 09:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.639 09:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.639 09:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:07.639 "name": "Existed_Raid", 00:12:07.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:07.639 "strip_size_kb": 0, 00:12:07.639 "state": "configuring", 00:12:07.639 "raid_level": "raid1", 00:12:07.639 "superblock": false, 00:12:07.639 "num_base_bdevs": 4, 00:12:07.639 "num_base_bdevs_discovered": 2, 00:12:07.639 "num_base_bdevs_operational": 4, 00:12:07.639 "base_bdevs_list": [ 00:12:07.639 { 00:12:07.639 "name": "BaseBdev1", 00:12:07.639 "uuid": "7324a84b-52c1-4467-b04f-4fd0e0ee84dc", 00:12:07.639 "is_configured": true, 00:12:07.639 "data_offset": 0, 00:12:07.639 "data_size": 65536 00:12:07.639 }, 00:12:07.639 { 00:12:07.639 "name": null, 00:12:07.639 "uuid": "3184eb36-5374-47b5-bd30-5f1c63b9781d", 00:12:07.639 "is_configured": false, 00:12:07.639 "data_offset": 0, 00:12:07.639 "data_size": 65536 00:12:07.639 }, 00:12:07.639 { 00:12:07.639 "name": null, 00:12:07.639 "uuid": "910bc96c-9f80-4252-80cf-38102223111f", 00:12:07.639 "is_configured": false, 00:12:07.639 "data_offset": 0, 00:12:07.639 "data_size": 65536 00:12:07.639 }, 00:12:07.639 { 00:12:07.639 "name": "BaseBdev4", 00:12:07.639 "uuid": "2ad598b1-7b21-4809-88aa-a0bc31889144", 00:12:07.639 "is_configured": true, 00:12:07.639 "data_offset": 0, 00:12:07.639 "data_size": 65536 00:12:07.639 } 00:12:07.639 ] 00:12:07.639 }' 00:12:07.639 09:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:07.639 09:56:44 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.209 09:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.209 09:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:08.209 09:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.209 09:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.209 09:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.209 09:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:08.209 09:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:08.209 09:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.209 09:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.209 [2024-10-21 09:56:44.551418] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:08.209 09:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.209 09:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:08.209 09:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:08.209 09:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:08.209 09:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:08.209 09:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:08.209 09:56:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:08.209 09:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.209 09:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.209 09:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:08.209 09:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.209 09:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.209 09:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.209 09:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:08.209 09:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.209 09:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.209 09:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:08.209 "name": "Existed_Raid", 00:12:08.209 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.209 "strip_size_kb": 0, 00:12:08.209 "state": "configuring", 00:12:08.209 "raid_level": "raid1", 00:12:08.209 "superblock": false, 00:12:08.209 "num_base_bdevs": 4, 00:12:08.209 "num_base_bdevs_discovered": 3, 00:12:08.209 "num_base_bdevs_operational": 4, 00:12:08.209 "base_bdevs_list": [ 00:12:08.209 { 00:12:08.209 "name": "BaseBdev1", 00:12:08.209 "uuid": "7324a84b-52c1-4467-b04f-4fd0e0ee84dc", 00:12:08.209 "is_configured": true, 00:12:08.209 "data_offset": 0, 00:12:08.209 "data_size": 65536 00:12:08.209 }, 00:12:08.209 { 00:12:08.209 "name": null, 00:12:08.209 "uuid": "3184eb36-5374-47b5-bd30-5f1c63b9781d", 00:12:08.209 "is_configured": false, 00:12:08.209 "data_offset": 
0, 00:12:08.209 "data_size": 65536 00:12:08.209 }, 00:12:08.209 { 00:12:08.209 "name": "BaseBdev3", 00:12:08.209 "uuid": "910bc96c-9f80-4252-80cf-38102223111f", 00:12:08.209 "is_configured": true, 00:12:08.209 "data_offset": 0, 00:12:08.209 "data_size": 65536 00:12:08.209 }, 00:12:08.209 { 00:12:08.209 "name": "BaseBdev4", 00:12:08.209 "uuid": "2ad598b1-7b21-4809-88aa-a0bc31889144", 00:12:08.209 "is_configured": true, 00:12:08.209 "data_offset": 0, 00:12:08.209 "data_size": 65536 00:12:08.209 } 00:12:08.209 ] 00:12:08.209 }' 00:12:08.209 09:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.209 09:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.470 09:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.470 09:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.470 09:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.470 09:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:08.470 09:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.470 09:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:08.470 09:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:08.470 09:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.470 09:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.470 [2024-10-21 09:56:45.058868] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:08.730 09:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.730 09:56:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:08.730 09:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:08.730 09:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:08.730 09:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:08.730 09:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:08.730 09:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:08.730 09:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.730 09:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.730 09:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:08.730 09:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.730 09:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.730 09:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:08.730 09:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.730 09:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.730 09:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.730 09:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:08.730 "name": "Existed_Raid", 00:12:08.730 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.730 "strip_size_kb": 0, 00:12:08.730 "state": "configuring", 00:12:08.730 
"raid_level": "raid1", 00:12:08.730 "superblock": false, 00:12:08.730 "num_base_bdevs": 4, 00:12:08.730 "num_base_bdevs_discovered": 2, 00:12:08.730 "num_base_bdevs_operational": 4, 00:12:08.730 "base_bdevs_list": [ 00:12:08.730 { 00:12:08.730 "name": null, 00:12:08.730 "uuid": "7324a84b-52c1-4467-b04f-4fd0e0ee84dc", 00:12:08.730 "is_configured": false, 00:12:08.730 "data_offset": 0, 00:12:08.730 "data_size": 65536 00:12:08.730 }, 00:12:08.730 { 00:12:08.730 "name": null, 00:12:08.730 "uuid": "3184eb36-5374-47b5-bd30-5f1c63b9781d", 00:12:08.730 "is_configured": false, 00:12:08.730 "data_offset": 0, 00:12:08.730 "data_size": 65536 00:12:08.730 }, 00:12:08.730 { 00:12:08.730 "name": "BaseBdev3", 00:12:08.730 "uuid": "910bc96c-9f80-4252-80cf-38102223111f", 00:12:08.730 "is_configured": true, 00:12:08.730 "data_offset": 0, 00:12:08.730 "data_size": 65536 00:12:08.730 }, 00:12:08.730 { 00:12:08.730 "name": "BaseBdev4", 00:12:08.730 "uuid": "2ad598b1-7b21-4809-88aa-a0bc31889144", 00:12:08.730 "is_configured": true, 00:12:08.730 "data_offset": 0, 00:12:08.730 "data_size": 65536 00:12:08.730 } 00:12:08.730 ] 00:12:08.730 }' 00:12:08.730 09:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.730 09:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.300 09:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.300 09:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:09.300 09:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.300 09:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.300 09:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.300 09:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ 
false == \f\a\l\s\e ]] 00:12:09.300 09:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:09.300 09:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.300 09:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.300 [2024-10-21 09:56:45.712473] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:09.300 09:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.300 09:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:09.300 09:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:09.300 09:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:09.300 09:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:09.300 09:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:09.300 09:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:09.300 09:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:09.300 09:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:09.300 09:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:09.300 09:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:09.300 09:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.300 09:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:12:09.300 09:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.300 09:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.300 09:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.300 09:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:09.300 "name": "Existed_Raid", 00:12:09.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:09.300 "strip_size_kb": 0, 00:12:09.300 "state": "configuring", 00:12:09.300 "raid_level": "raid1", 00:12:09.300 "superblock": false, 00:12:09.300 "num_base_bdevs": 4, 00:12:09.300 "num_base_bdevs_discovered": 3, 00:12:09.300 "num_base_bdevs_operational": 4, 00:12:09.300 "base_bdevs_list": [ 00:12:09.300 { 00:12:09.300 "name": null, 00:12:09.300 "uuid": "7324a84b-52c1-4467-b04f-4fd0e0ee84dc", 00:12:09.300 "is_configured": false, 00:12:09.300 "data_offset": 0, 00:12:09.300 "data_size": 65536 00:12:09.300 }, 00:12:09.300 { 00:12:09.300 "name": "BaseBdev2", 00:12:09.300 "uuid": "3184eb36-5374-47b5-bd30-5f1c63b9781d", 00:12:09.300 "is_configured": true, 00:12:09.300 "data_offset": 0, 00:12:09.300 "data_size": 65536 00:12:09.300 }, 00:12:09.300 { 00:12:09.300 "name": "BaseBdev3", 00:12:09.300 "uuid": "910bc96c-9f80-4252-80cf-38102223111f", 00:12:09.300 "is_configured": true, 00:12:09.300 "data_offset": 0, 00:12:09.300 "data_size": 65536 00:12:09.300 }, 00:12:09.300 { 00:12:09.300 "name": "BaseBdev4", 00:12:09.300 "uuid": "2ad598b1-7b21-4809-88aa-a0bc31889144", 00:12:09.300 "is_configured": true, 00:12:09.300 "data_offset": 0, 00:12:09.300 "data_size": 65536 00:12:09.300 } 00:12:09.300 ] 00:12:09.300 }' 00:12:09.300 09:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:09.300 09:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.871 09:56:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:09.871 09:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.871 09:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.871 09:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.871 09:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.871 09:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:09.871 09:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.871 09:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:09.871 09:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.871 09:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.871 09:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.871 09:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 7324a84b-52c1-4467-b04f-4fd0e0ee84dc 00:12:09.871 09:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.871 09:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.871 [2024-10-21 09:56:46.320331] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:09.871 [2024-10-21 09:56:46.320527] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:12:09.871 [2024-10-21 09:56:46.320593] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:09.871 
[2024-10-21 09:56:46.321014] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:12:09.871 [2024-10-21 09:56:46.321316] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:12:09.871 [2024-10-21 09:56:46.321375] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006600 00:12:09.871 [2024-10-21 09:56:46.321831] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:09.871 NewBaseBdev 00:12:09.871 09:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.871 09:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:09.871 09:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:12:09.871 09:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:09.871 09:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:09.871 09:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:09.871 09:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:09.871 09:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:09.871 09:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.871 09:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.871 09:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.871 09:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:09.871 09:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:09.871 09:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.871 [ 00:12:09.871 { 00:12:09.871 "name": "NewBaseBdev", 00:12:09.871 "aliases": [ 00:12:09.871 "7324a84b-52c1-4467-b04f-4fd0e0ee84dc" 00:12:09.871 ], 00:12:09.871 "product_name": "Malloc disk", 00:12:09.871 "block_size": 512, 00:12:09.871 "num_blocks": 65536, 00:12:09.871 "uuid": "7324a84b-52c1-4467-b04f-4fd0e0ee84dc", 00:12:09.871 "assigned_rate_limits": { 00:12:09.871 "rw_ios_per_sec": 0, 00:12:09.871 "rw_mbytes_per_sec": 0, 00:12:09.871 "r_mbytes_per_sec": 0, 00:12:09.871 "w_mbytes_per_sec": 0 00:12:09.871 }, 00:12:09.871 "claimed": true, 00:12:09.871 "claim_type": "exclusive_write", 00:12:09.871 "zoned": false, 00:12:09.871 "supported_io_types": { 00:12:09.871 "read": true, 00:12:09.871 "write": true, 00:12:09.871 "unmap": true, 00:12:09.871 "flush": true, 00:12:09.871 "reset": true, 00:12:09.871 "nvme_admin": false, 00:12:09.871 "nvme_io": false, 00:12:09.871 "nvme_io_md": false, 00:12:09.871 "write_zeroes": true, 00:12:09.871 "zcopy": true, 00:12:09.871 "get_zone_info": false, 00:12:09.871 "zone_management": false, 00:12:09.871 "zone_append": false, 00:12:09.871 "compare": false, 00:12:09.871 "compare_and_write": false, 00:12:09.871 "abort": true, 00:12:09.871 "seek_hole": false, 00:12:09.871 "seek_data": false, 00:12:09.871 "copy": true, 00:12:09.871 "nvme_iov_md": false 00:12:09.871 }, 00:12:09.871 "memory_domains": [ 00:12:09.871 { 00:12:09.871 "dma_device_id": "system", 00:12:09.871 "dma_device_type": 1 00:12:09.871 }, 00:12:09.871 { 00:12:09.871 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:09.871 "dma_device_type": 2 00:12:09.871 } 00:12:09.871 ], 00:12:09.871 "driver_specific": {} 00:12:09.871 } 00:12:09.871 ] 00:12:09.871 09:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.871 09:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 
00:12:09.871 09:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:12:09.871 09:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:09.871 09:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:09.871 09:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:09.871 09:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:09.871 09:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:09.871 09:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:09.871 09:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:09.871 09:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:09.871 09:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:09.871 09:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.871 09:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.871 09:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.871 09:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:09.872 09:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.872 09:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:09.872 "name": "Existed_Raid", 00:12:09.872 "uuid": "a18e39e0-14d3-4aff-b62f-c7e18450b44a", 00:12:09.872 "strip_size_kb": 0, 00:12:09.872 "state": "online", 00:12:09.872 
"raid_level": "raid1", 00:12:09.872 "superblock": false, 00:12:09.872 "num_base_bdevs": 4, 00:12:09.872 "num_base_bdevs_discovered": 4, 00:12:09.872 "num_base_bdevs_operational": 4, 00:12:09.872 "base_bdevs_list": [ 00:12:09.872 { 00:12:09.872 "name": "NewBaseBdev", 00:12:09.872 "uuid": "7324a84b-52c1-4467-b04f-4fd0e0ee84dc", 00:12:09.872 "is_configured": true, 00:12:09.872 "data_offset": 0, 00:12:09.872 "data_size": 65536 00:12:09.872 }, 00:12:09.872 { 00:12:09.872 "name": "BaseBdev2", 00:12:09.872 "uuid": "3184eb36-5374-47b5-bd30-5f1c63b9781d", 00:12:09.872 "is_configured": true, 00:12:09.872 "data_offset": 0, 00:12:09.872 "data_size": 65536 00:12:09.872 }, 00:12:09.872 { 00:12:09.872 "name": "BaseBdev3", 00:12:09.872 "uuid": "910bc96c-9f80-4252-80cf-38102223111f", 00:12:09.872 "is_configured": true, 00:12:09.872 "data_offset": 0, 00:12:09.872 "data_size": 65536 00:12:09.872 }, 00:12:09.872 { 00:12:09.872 "name": "BaseBdev4", 00:12:09.872 "uuid": "2ad598b1-7b21-4809-88aa-a0bc31889144", 00:12:09.872 "is_configured": true, 00:12:09.872 "data_offset": 0, 00:12:09.872 "data_size": 65536 00:12:09.872 } 00:12:09.872 ] 00:12:09.872 }' 00:12:09.872 09:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:09.872 09:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.441 09:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:10.441 09:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:10.441 09:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:10.441 09:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:10.441 09:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:10.441 09:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- 
# local cmp_raid_bdev cmp_base_bdev 00:12:10.441 09:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:10.441 09:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:10.441 09:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.441 09:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.441 [2024-10-21 09:56:46.788124] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:10.441 09:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.441 09:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:10.441 "name": "Existed_Raid", 00:12:10.441 "aliases": [ 00:12:10.441 "a18e39e0-14d3-4aff-b62f-c7e18450b44a" 00:12:10.441 ], 00:12:10.441 "product_name": "Raid Volume", 00:12:10.441 "block_size": 512, 00:12:10.441 "num_blocks": 65536, 00:12:10.441 "uuid": "a18e39e0-14d3-4aff-b62f-c7e18450b44a", 00:12:10.441 "assigned_rate_limits": { 00:12:10.441 "rw_ios_per_sec": 0, 00:12:10.441 "rw_mbytes_per_sec": 0, 00:12:10.441 "r_mbytes_per_sec": 0, 00:12:10.441 "w_mbytes_per_sec": 0 00:12:10.441 }, 00:12:10.441 "claimed": false, 00:12:10.441 "zoned": false, 00:12:10.441 "supported_io_types": { 00:12:10.441 "read": true, 00:12:10.441 "write": true, 00:12:10.441 "unmap": false, 00:12:10.441 "flush": false, 00:12:10.441 "reset": true, 00:12:10.441 "nvme_admin": false, 00:12:10.441 "nvme_io": false, 00:12:10.441 "nvme_io_md": false, 00:12:10.441 "write_zeroes": true, 00:12:10.441 "zcopy": false, 00:12:10.441 "get_zone_info": false, 00:12:10.441 "zone_management": false, 00:12:10.441 "zone_append": false, 00:12:10.441 "compare": false, 00:12:10.441 "compare_and_write": false, 00:12:10.441 "abort": false, 00:12:10.441 "seek_hole": false, 00:12:10.441 "seek_data": false, 00:12:10.441 
"copy": false, 00:12:10.441 "nvme_iov_md": false 00:12:10.441 }, 00:12:10.441 "memory_domains": [ 00:12:10.441 { 00:12:10.441 "dma_device_id": "system", 00:12:10.441 "dma_device_type": 1 00:12:10.441 }, 00:12:10.441 { 00:12:10.441 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:10.441 "dma_device_type": 2 00:12:10.441 }, 00:12:10.441 { 00:12:10.441 "dma_device_id": "system", 00:12:10.441 "dma_device_type": 1 00:12:10.441 }, 00:12:10.441 { 00:12:10.441 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:10.441 "dma_device_type": 2 00:12:10.441 }, 00:12:10.441 { 00:12:10.441 "dma_device_id": "system", 00:12:10.441 "dma_device_type": 1 00:12:10.441 }, 00:12:10.441 { 00:12:10.441 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:10.441 "dma_device_type": 2 00:12:10.441 }, 00:12:10.441 { 00:12:10.441 "dma_device_id": "system", 00:12:10.441 "dma_device_type": 1 00:12:10.441 }, 00:12:10.441 { 00:12:10.441 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:10.441 "dma_device_type": 2 00:12:10.441 } 00:12:10.441 ], 00:12:10.441 "driver_specific": { 00:12:10.441 "raid": { 00:12:10.441 "uuid": "a18e39e0-14d3-4aff-b62f-c7e18450b44a", 00:12:10.441 "strip_size_kb": 0, 00:12:10.441 "state": "online", 00:12:10.441 "raid_level": "raid1", 00:12:10.441 "superblock": false, 00:12:10.441 "num_base_bdevs": 4, 00:12:10.441 "num_base_bdevs_discovered": 4, 00:12:10.441 "num_base_bdevs_operational": 4, 00:12:10.441 "base_bdevs_list": [ 00:12:10.441 { 00:12:10.441 "name": "NewBaseBdev", 00:12:10.441 "uuid": "7324a84b-52c1-4467-b04f-4fd0e0ee84dc", 00:12:10.441 "is_configured": true, 00:12:10.441 "data_offset": 0, 00:12:10.441 "data_size": 65536 00:12:10.441 }, 00:12:10.441 { 00:12:10.441 "name": "BaseBdev2", 00:12:10.441 "uuid": "3184eb36-5374-47b5-bd30-5f1c63b9781d", 00:12:10.441 "is_configured": true, 00:12:10.441 "data_offset": 0, 00:12:10.441 "data_size": 65536 00:12:10.441 }, 00:12:10.441 { 00:12:10.441 "name": "BaseBdev3", 00:12:10.441 "uuid": "910bc96c-9f80-4252-80cf-38102223111f", 00:12:10.441 
"is_configured": true, 00:12:10.441 "data_offset": 0, 00:12:10.441 "data_size": 65536 00:12:10.441 }, 00:12:10.441 { 00:12:10.441 "name": "BaseBdev4", 00:12:10.441 "uuid": "2ad598b1-7b21-4809-88aa-a0bc31889144", 00:12:10.441 "is_configured": true, 00:12:10.441 "data_offset": 0, 00:12:10.441 "data_size": 65536 00:12:10.441 } 00:12:10.441 ] 00:12:10.441 } 00:12:10.441 } 00:12:10.441 }' 00:12:10.441 09:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:10.442 09:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:10.442 BaseBdev2 00:12:10.442 BaseBdev3 00:12:10.442 BaseBdev4' 00:12:10.442 09:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:10.442 09:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:10.442 09:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:10.442 09:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:10.442 09:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:10.442 09:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.442 09:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.442 09:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.442 09:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:10.442 09:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:10.442 09:56:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:10.442 09:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:10.442 09:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:10.442 09:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.442 09:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.442 09:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.442 09:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:10.442 09:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:10.442 09:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:10.702 09:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:10.702 09:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:10.702 09:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.702 09:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.702 09:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.702 09:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:10.702 09:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:10.702 09:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:10.702 09:56:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:10.702 09:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:10.702 09:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.702 09:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.702 09:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.702 09:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:10.702 09:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:10.702 09:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:10.702 09:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.702 09:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.702 [2024-10-21 09:56:47.151075] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:10.702 [2024-10-21 09:56:47.151205] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:10.702 [2024-10-21 09:56:47.151368] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:10.702 [2024-10-21 09:56:47.151802] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:10.702 [2024-10-21 09:56:47.151888] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state offline 00:12:10.702 09:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.702 09:56:47 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 72768 00:12:10.702 09:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 72768 ']' 00:12:10.702 09:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 72768 00:12:10.702 09:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:12:10.702 09:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:10.702 09:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72768 00:12:10.702 killing process with pid 72768 00:12:10.702 09:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:10.702 09:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:10.702 09:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72768' 00:12:10.702 09:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 72768 00:12:10.702 [2024-10-21 09:56:47.203432] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:10.702 09:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 72768 00:12:11.271 [2024-10-21 09:56:47.734193] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:12.652 09:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:12:12.652 00:12:12.652 real 0m12.476s 00:12:12.652 user 0m19.266s 00:12:12.652 sys 0m2.594s 00:12:12.652 09:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:12.652 09:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.652 ************************************ 00:12:12.652 END TEST raid_state_function_test 00:12:12.652 ************************************ 
00:12:12.652 09:56:49 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:12:12.652 09:56:49 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:12.652 09:56:49 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:12.652 09:56:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:12.652 ************************************ 00:12:12.652 START TEST raid_state_function_test_sb 00:12:12.652 ************************************ 00:12:12.652 09:56:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 4 true 00:12:12.652 09:56:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:12:12.652 09:56:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:12.652 09:56:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:12:12.652 09:56:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:12.652 09:56:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:12.652 09:56:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:12.652 09:56:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:12.652 09:56:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:12.653 09:56:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:12.653 09:56:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:12.653 09:56:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:12.653 09:56:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:12.653 
09:56:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:12.653 09:56:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:12.653 09:56:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:12.653 09:56:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:12.653 09:56:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:12.653 09:56:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:12.653 09:56:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:12.653 09:56:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:12.653 09:56:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:12.653 09:56:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:12.653 09:56:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:12.653 09:56:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:12.653 09:56:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:12:12.653 09:56:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:12:12.653 09:56:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:12:12.653 09:56:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:12:12.653 09:56:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=73450 00:12:12.653 09:56:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:12.653 09:56:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73450' 00:12:12.653 Process raid pid: 73450 00:12:12.653 09:56:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 73450 00:12:12.653 09:56:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 73450 ']' 00:12:12.653 09:56:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:12.653 09:56:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:12.653 09:56:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:12.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:12.653 09:56:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:12.653 09:56:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.653 [2024-10-21 09:56:49.185659] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
00:12:12.653 [2024-10-21 09:56:49.185890] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:12.913 [2024-10-21 09:56:49.355332] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:13.173 [2024-10-21 09:56:49.508814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:13.433 [2024-10-21 09:56:49.819830] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:13.433 [2024-10-21 09:56:49.819985] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:13.692 09:56:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:13.692 09:56:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:12:13.692 09:56:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:13.692 09:56:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.692 09:56:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.692 [2024-10-21 09:56:50.077186] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:13.692 [2024-10-21 09:56:50.077255] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:13.692 [2024-10-21 09:56:50.077268] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:13.692 [2024-10-21 09:56:50.077280] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:13.692 [2024-10-21 09:56:50.077289] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:12:13.692 [2024-10-21 09:56:50.077300] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:13.692 [2024-10-21 09:56:50.077308] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:13.692 [2024-10-21 09:56:50.077319] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:13.692 09:56:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.692 09:56:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:13.692 09:56:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:13.692 09:56:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:13.692 09:56:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:13.692 09:56:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:13.692 09:56:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:13.692 09:56:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:13.692 09:56:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:13.692 09:56:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:13.692 09:56:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:13.692 09:56:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.692 09:56:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:13.692 09:56:50 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.692 09:56:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.692 09:56:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.692 09:56:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:13.692 "name": "Existed_Raid", 00:12:13.692 "uuid": "83c7dd5e-c02e-48c5-a916-2ea7377a5fd4", 00:12:13.692 "strip_size_kb": 0, 00:12:13.692 "state": "configuring", 00:12:13.692 "raid_level": "raid1", 00:12:13.692 "superblock": true, 00:12:13.692 "num_base_bdevs": 4, 00:12:13.692 "num_base_bdevs_discovered": 0, 00:12:13.692 "num_base_bdevs_operational": 4, 00:12:13.692 "base_bdevs_list": [ 00:12:13.692 { 00:12:13.692 "name": "BaseBdev1", 00:12:13.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.692 "is_configured": false, 00:12:13.692 "data_offset": 0, 00:12:13.692 "data_size": 0 00:12:13.692 }, 00:12:13.692 { 00:12:13.692 "name": "BaseBdev2", 00:12:13.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.692 "is_configured": false, 00:12:13.692 "data_offset": 0, 00:12:13.692 "data_size": 0 00:12:13.692 }, 00:12:13.692 { 00:12:13.692 "name": "BaseBdev3", 00:12:13.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.692 "is_configured": false, 00:12:13.692 "data_offset": 0, 00:12:13.692 "data_size": 0 00:12:13.692 }, 00:12:13.692 { 00:12:13.692 "name": "BaseBdev4", 00:12:13.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.692 "is_configured": false, 00:12:13.692 "data_offset": 0, 00:12:13.692 "data_size": 0 00:12:13.692 } 00:12:13.692 ] 00:12:13.692 }' 00:12:13.692 09:56:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:13.692 09:56:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.951 09:56:50 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:13.951 09:56:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.951 09:56:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.951 [2024-10-21 09:56:50.504410] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:13.951 [2024-10-21 09:56:50.504519] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000005b80 name Existed_Raid, state configuring 00:12:13.951 09:56:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.951 09:56:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:13.951 09:56:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.951 09:56:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.951 [2024-10-21 09:56:50.516381] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:13.951 [2024-10-21 09:56:50.516472] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:13.951 [2024-10-21 09:56:50.516505] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:13.951 [2024-10-21 09:56:50.516534] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:13.951 [2024-10-21 09:56:50.516557] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:13.951 [2024-10-21 09:56:50.516601] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:13.951 [2024-10-21 09:56:50.516627] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:12:13.951 [2024-10-21 09:56:50.516698] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:13.951 09:56:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.951 09:56:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:13.951 09:56:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.951 09:56:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.212 [2024-10-21 09:56:50.583826] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:14.212 BaseBdev1 00:12:14.212 09:56:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.212 09:56:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:14.212 09:56:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:12:14.212 09:56:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:14.212 09:56:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:14.212 09:56:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:14.212 09:56:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:14.212 09:56:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:14.212 09:56:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.212 09:56:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.212 09:56:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:12:14.212 09:56:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:14.212 09:56:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.212 09:56:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.212 [ 00:12:14.212 { 00:12:14.212 "name": "BaseBdev1", 00:12:14.212 "aliases": [ 00:12:14.212 "cce631d8-a5ef-4f97-8159-25cc7cae9630" 00:12:14.212 ], 00:12:14.212 "product_name": "Malloc disk", 00:12:14.212 "block_size": 512, 00:12:14.212 "num_blocks": 65536, 00:12:14.212 "uuid": "cce631d8-a5ef-4f97-8159-25cc7cae9630", 00:12:14.212 "assigned_rate_limits": { 00:12:14.212 "rw_ios_per_sec": 0, 00:12:14.212 "rw_mbytes_per_sec": 0, 00:12:14.212 "r_mbytes_per_sec": 0, 00:12:14.212 "w_mbytes_per_sec": 0 00:12:14.212 }, 00:12:14.212 "claimed": true, 00:12:14.212 "claim_type": "exclusive_write", 00:12:14.212 "zoned": false, 00:12:14.212 "supported_io_types": { 00:12:14.212 "read": true, 00:12:14.212 "write": true, 00:12:14.212 "unmap": true, 00:12:14.212 "flush": true, 00:12:14.212 "reset": true, 00:12:14.212 "nvme_admin": false, 00:12:14.212 "nvme_io": false, 00:12:14.212 "nvme_io_md": false, 00:12:14.212 "write_zeroes": true, 00:12:14.212 "zcopy": true, 00:12:14.212 "get_zone_info": false, 00:12:14.212 "zone_management": false, 00:12:14.212 "zone_append": false, 00:12:14.212 "compare": false, 00:12:14.212 "compare_and_write": false, 00:12:14.212 "abort": true, 00:12:14.212 "seek_hole": false, 00:12:14.212 "seek_data": false, 00:12:14.212 "copy": true, 00:12:14.212 "nvme_iov_md": false 00:12:14.212 }, 00:12:14.212 "memory_domains": [ 00:12:14.212 { 00:12:14.212 "dma_device_id": "system", 00:12:14.212 "dma_device_type": 1 00:12:14.212 }, 00:12:14.212 { 00:12:14.212 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:14.212 "dma_device_type": 2 00:12:14.212 } 00:12:14.212 ], 00:12:14.212 "driver_specific": {} 
00:12:14.212 } 00:12:14.212 ] 00:12:14.212 09:56:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.212 09:56:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:14.212 09:56:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:14.212 09:56:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:14.212 09:56:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:14.212 09:56:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:14.212 09:56:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:14.212 09:56:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:14.212 09:56:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:14.212 09:56:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:14.212 09:56:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:14.212 09:56:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:14.212 09:56:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.212 09:56:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.212 09:56:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:14.212 09:56:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.212 09:56:50 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.212 09:56:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:14.212 "name": "Existed_Raid", 00:12:14.212 "uuid": "0b7100d6-6359-4f97-82a9-93893a91972a", 00:12:14.212 "strip_size_kb": 0, 00:12:14.212 "state": "configuring", 00:12:14.212 "raid_level": "raid1", 00:12:14.212 "superblock": true, 00:12:14.212 "num_base_bdevs": 4, 00:12:14.212 "num_base_bdevs_discovered": 1, 00:12:14.212 "num_base_bdevs_operational": 4, 00:12:14.212 "base_bdevs_list": [ 00:12:14.212 { 00:12:14.212 "name": "BaseBdev1", 00:12:14.212 "uuid": "cce631d8-a5ef-4f97-8159-25cc7cae9630", 00:12:14.212 "is_configured": true, 00:12:14.212 "data_offset": 2048, 00:12:14.212 "data_size": 63488 00:12:14.212 }, 00:12:14.212 { 00:12:14.212 "name": "BaseBdev2", 00:12:14.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:14.212 "is_configured": false, 00:12:14.212 "data_offset": 0, 00:12:14.212 "data_size": 0 00:12:14.212 }, 00:12:14.212 { 00:12:14.212 "name": "BaseBdev3", 00:12:14.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:14.212 "is_configured": false, 00:12:14.212 "data_offset": 0, 00:12:14.212 "data_size": 0 00:12:14.212 }, 00:12:14.212 { 00:12:14.212 "name": "BaseBdev4", 00:12:14.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:14.212 "is_configured": false, 00:12:14.212 "data_offset": 0, 00:12:14.212 "data_size": 0 00:12:14.212 } 00:12:14.212 ] 00:12:14.212 }' 00:12:14.212 09:56:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:14.212 09:56:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.479 09:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:14.479 09:56:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.479 09:56:51 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:14.738 [2024-10-21 09:56:51.079125] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:14.738 [2024-10-21 09:56:51.079249] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000005f00 name Existed_Raid, state configuring 00:12:14.738 09:56:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.738 09:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:14.738 09:56:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.738 09:56:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.738 [2024-10-21 09:56:51.091173] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:14.738 [2024-10-21 09:56:51.093656] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:14.738 [2024-10-21 09:56:51.093702] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:14.738 [2024-10-21 09:56:51.093713] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:14.738 [2024-10-21 09:56:51.093726] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:14.738 [2024-10-21 09:56:51.093734] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:14.738 [2024-10-21 09:56:51.093745] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:14.738 09:56:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.738 09:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:14.738 09:56:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:14.738 09:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:14.738 09:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:14.739 09:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:14.739 09:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:14.739 09:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:14.739 09:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:14.739 09:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:14.739 09:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:14.739 09:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:14.739 09:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:14.739 09:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:14.739 09:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.739 09:56:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.739 09:56:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.739 09:56:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.739 09:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:14.739 "name": 
"Existed_Raid", 00:12:14.739 "uuid": "305703a4-8bbd-4123-a1db-4521eb3c4faa", 00:12:14.739 "strip_size_kb": 0, 00:12:14.739 "state": "configuring", 00:12:14.739 "raid_level": "raid1", 00:12:14.739 "superblock": true, 00:12:14.739 "num_base_bdevs": 4, 00:12:14.739 "num_base_bdevs_discovered": 1, 00:12:14.739 "num_base_bdevs_operational": 4, 00:12:14.739 "base_bdevs_list": [ 00:12:14.739 { 00:12:14.739 "name": "BaseBdev1", 00:12:14.739 "uuid": "cce631d8-a5ef-4f97-8159-25cc7cae9630", 00:12:14.739 "is_configured": true, 00:12:14.739 "data_offset": 2048, 00:12:14.739 "data_size": 63488 00:12:14.739 }, 00:12:14.739 { 00:12:14.739 "name": "BaseBdev2", 00:12:14.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:14.739 "is_configured": false, 00:12:14.739 "data_offset": 0, 00:12:14.739 "data_size": 0 00:12:14.739 }, 00:12:14.739 { 00:12:14.739 "name": "BaseBdev3", 00:12:14.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:14.739 "is_configured": false, 00:12:14.739 "data_offset": 0, 00:12:14.739 "data_size": 0 00:12:14.739 }, 00:12:14.739 { 00:12:14.739 "name": "BaseBdev4", 00:12:14.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:14.739 "is_configured": false, 00:12:14.739 "data_offset": 0, 00:12:14.739 "data_size": 0 00:12:14.739 } 00:12:14.739 ] 00:12:14.739 }' 00:12:14.739 09:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:14.739 09:56:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.308 09:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:15.308 09:56:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.308 09:56:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.308 [2024-10-21 09:56:51.657554] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:15.308 
BaseBdev2 00:12:15.308 09:56:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.308 09:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:15.308 09:56:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:12:15.308 09:56:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:15.308 09:56:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:15.308 09:56:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:15.308 09:56:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:15.308 09:56:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:15.308 09:56:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.308 09:56:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.308 09:56:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.308 09:56:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:15.308 09:56:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.308 09:56:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.308 [ 00:12:15.308 { 00:12:15.308 "name": "BaseBdev2", 00:12:15.308 "aliases": [ 00:12:15.308 "92bc8753-917b-4821-b92d-783bcbd9abac" 00:12:15.308 ], 00:12:15.308 "product_name": "Malloc disk", 00:12:15.308 "block_size": 512, 00:12:15.308 "num_blocks": 65536, 00:12:15.308 "uuid": "92bc8753-917b-4821-b92d-783bcbd9abac", 00:12:15.308 "assigned_rate_limits": { 
00:12:15.308 "rw_ios_per_sec": 0, 00:12:15.308 "rw_mbytes_per_sec": 0, 00:12:15.308 "r_mbytes_per_sec": 0, 00:12:15.308 "w_mbytes_per_sec": 0 00:12:15.308 }, 00:12:15.308 "claimed": true, 00:12:15.308 "claim_type": "exclusive_write", 00:12:15.308 "zoned": false, 00:12:15.308 "supported_io_types": { 00:12:15.308 "read": true, 00:12:15.308 "write": true, 00:12:15.308 "unmap": true, 00:12:15.308 "flush": true, 00:12:15.308 "reset": true, 00:12:15.308 "nvme_admin": false, 00:12:15.308 "nvme_io": false, 00:12:15.308 "nvme_io_md": false, 00:12:15.308 "write_zeroes": true, 00:12:15.308 "zcopy": true, 00:12:15.308 "get_zone_info": false, 00:12:15.308 "zone_management": false, 00:12:15.308 "zone_append": false, 00:12:15.308 "compare": false, 00:12:15.308 "compare_and_write": false, 00:12:15.308 "abort": true, 00:12:15.308 "seek_hole": false, 00:12:15.308 "seek_data": false, 00:12:15.308 "copy": true, 00:12:15.308 "nvme_iov_md": false 00:12:15.308 }, 00:12:15.308 "memory_domains": [ 00:12:15.308 { 00:12:15.308 "dma_device_id": "system", 00:12:15.308 "dma_device_type": 1 00:12:15.308 }, 00:12:15.308 { 00:12:15.308 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:15.308 "dma_device_type": 2 00:12:15.308 } 00:12:15.308 ], 00:12:15.308 "driver_specific": {} 00:12:15.308 } 00:12:15.308 ] 00:12:15.308 09:56:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.308 09:56:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:15.308 09:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:15.308 09:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:15.308 09:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:15.308 09:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:12:15.308 09:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:15.308 09:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:15.308 09:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:15.308 09:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:15.308 09:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:15.308 09:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:15.308 09:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:15.308 09:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:15.308 09:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.308 09:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:15.308 09:56:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.308 09:56:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.308 09:56:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.308 09:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:15.308 "name": "Existed_Raid", 00:12:15.308 "uuid": "305703a4-8bbd-4123-a1db-4521eb3c4faa", 00:12:15.308 "strip_size_kb": 0, 00:12:15.308 "state": "configuring", 00:12:15.308 "raid_level": "raid1", 00:12:15.308 "superblock": true, 00:12:15.308 "num_base_bdevs": 4, 00:12:15.308 "num_base_bdevs_discovered": 2, 00:12:15.308 "num_base_bdevs_operational": 4, 00:12:15.308 
"base_bdevs_list": [ 00:12:15.308 { 00:12:15.308 "name": "BaseBdev1", 00:12:15.308 "uuid": "cce631d8-a5ef-4f97-8159-25cc7cae9630", 00:12:15.308 "is_configured": true, 00:12:15.308 "data_offset": 2048, 00:12:15.308 "data_size": 63488 00:12:15.308 }, 00:12:15.308 { 00:12:15.308 "name": "BaseBdev2", 00:12:15.308 "uuid": "92bc8753-917b-4821-b92d-783bcbd9abac", 00:12:15.308 "is_configured": true, 00:12:15.308 "data_offset": 2048, 00:12:15.308 "data_size": 63488 00:12:15.308 }, 00:12:15.308 { 00:12:15.308 "name": "BaseBdev3", 00:12:15.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:15.308 "is_configured": false, 00:12:15.308 "data_offset": 0, 00:12:15.308 "data_size": 0 00:12:15.308 }, 00:12:15.308 { 00:12:15.308 "name": "BaseBdev4", 00:12:15.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:15.308 "is_configured": false, 00:12:15.308 "data_offset": 0, 00:12:15.308 "data_size": 0 00:12:15.308 } 00:12:15.308 ] 00:12:15.308 }' 00:12:15.308 09:56:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:15.308 09:56:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.878 09:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:15.878 09:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.878 09:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.878 [2024-10-21 09:56:52.252814] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:15.878 BaseBdev3 00:12:15.878 09:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.878 09:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:15.878 09:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local 
bdev_name=BaseBdev3 00:12:15.878 09:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:15.878 09:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:15.878 09:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:15.878 09:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:15.878 09:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:15.878 09:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.878 09:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.878 09:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.878 09:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:15.878 09:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.878 09:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.878 [ 00:12:15.878 { 00:12:15.878 "name": "BaseBdev3", 00:12:15.878 "aliases": [ 00:12:15.878 "856a1406-7800-478d-a9f1-f3cf98d46fb8" 00:12:15.878 ], 00:12:15.878 "product_name": "Malloc disk", 00:12:15.878 "block_size": 512, 00:12:15.878 "num_blocks": 65536, 00:12:15.878 "uuid": "856a1406-7800-478d-a9f1-f3cf98d46fb8", 00:12:15.878 "assigned_rate_limits": { 00:12:15.878 "rw_ios_per_sec": 0, 00:12:15.878 "rw_mbytes_per_sec": 0, 00:12:15.878 "r_mbytes_per_sec": 0, 00:12:15.878 "w_mbytes_per_sec": 0 00:12:15.878 }, 00:12:15.878 "claimed": true, 00:12:15.878 "claim_type": "exclusive_write", 00:12:15.878 "zoned": false, 00:12:15.878 "supported_io_types": { 00:12:15.878 "read": true, 00:12:15.878 
"write": true, 00:12:15.878 "unmap": true, 00:12:15.878 "flush": true, 00:12:15.878 "reset": true, 00:12:15.878 "nvme_admin": false, 00:12:15.878 "nvme_io": false, 00:12:15.878 "nvme_io_md": false, 00:12:15.878 "write_zeroes": true, 00:12:15.878 "zcopy": true, 00:12:15.878 "get_zone_info": false, 00:12:15.878 "zone_management": false, 00:12:15.878 "zone_append": false, 00:12:15.878 "compare": false, 00:12:15.878 "compare_and_write": false, 00:12:15.878 "abort": true, 00:12:15.878 "seek_hole": false, 00:12:15.878 "seek_data": false, 00:12:15.878 "copy": true, 00:12:15.878 "nvme_iov_md": false 00:12:15.878 }, 00:12:15.878 "memory_domains": [ 00:12:15.878 { 00:12:15.878 "dma_device_id": "system", 00:12:15.878 "dma_device_type": 1 00:12:15.878 }, 00:12:15.878 { 00:12:15.878 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:15.878 "dma_device_type": 2 00:12:15.878 } 00:12:15.878 ], 00:12:15.878 "driver_specific": {} 00:12:15.878 } 00:12:15.878 ] 00:12:15.878 09:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.878 09:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:15.878 09:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:15.878 09:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:15.878 09:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:15.878 09:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:15.878 09:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:15.878 09:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:15.878 09:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:12:15.878 09:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:15.878 09:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:15.878 09:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:15.878 09:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:15.878 09:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:15.878 09:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.878 09:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:15.878 09:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.878 09:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.878 09:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.878 09:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:15.878 "name": "Existed_Raid", 00:12:15.878 "uuid": "305703a4-8bbd-4123-a1db-4521eb3c4faa", 00:12:15.878 "strip_size_kb": 0, 00:12:15.878 "state": "configuring", 00:12:15.878 "raid_level": "raid1", 00:12:15.878 "superblock": true, 00:12:15.878 "num_base_bdevs": 4, 00:12:15.878 "num_base_bdevs_discovered": 3, 00:12:15.878 "num_base_bdevs_operational": 4, 00:12:15.878 "base_bdevs_list": [ 00:12:15.878 { 00:12:15.878 "name": "BaseBdev1", 00:12:15.878 "uuid": "cce631d8-a5ef-4f97-8159-25cc7cae9630", 00:12:15.878 "is_configured": true, 00:12:15.878 "data_offset": 2048, 00:12:15.878 "data_size": 63488 00:12:15.878 }, 00:12:15.878 { 00:12:15.878 "name": "BaseBdev2", 00:12:15.878 "uuid": 
"92bc8753-917b-4821-b92d-783bcbd9abac", 00:12:15.878 "is_configured": true, 00:12:15.878 "data_offset": 2048, 00:12:15.878 "data_size": 63488 00:12:15.878 }, 00:12:15.878 { 00:12:15.878 "name": "BaseBdev3", 00:12:15.878 "uuid": "856a1406-7800-478d-a9f1-f3cf98d46fb8", 00:12:15.878 "is_configured": true, 00:12:15.878 "data_offset": 2048, 00:12:15.878 "data_size": 63488 00:12:15.878 }, 00:12:15.878 { 00:12:15.878 "name": "BaseBdev4", 00:12:15.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:15.878 "is_configured": false, 00:12:15.878 "data_offset": 0, 00:12:15.878 "data_size": 0 00:12:15.878 } 00:12:15.878 ] 00:12:15.878 }' 00:12:15.878 09:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:15.878 09:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.447 09:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:16.447 09:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.447 09:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.447 [2024-10-21 09:56:52.841656] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:16.447 [2024-10-21 09:56:52.842125] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:12:16.447 [2024-10-21 09:56:52.842189] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:16.447 [2024-10-21 09:56:52.842596] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:12:16.447 BaseBdev4 00:12:16.447 [2024-10-21 09:56:52.842847] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:12:16.447 [2024-10-21 09:56:52.842872] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000006280 00:12:16.447 [2024-10-21 09:56:52.843063] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:16.447 09:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.447 09:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:16.447 09:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:12:16.447 09:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:16.447 09:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:16.447 09:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:16.447 09:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:16.447 09:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:16.447 09:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.447 09:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.447 09:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.447 09:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:16.447 09:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.447 09:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.447 [ 00:12:16.447 { 00:12:16.447 "name": "BaseBdev4", 00:12:16.447 "aliases": [ 00:12:16.447 "745c4e97-483a-4026-a141-0e46714193f4" 00:12:16.447 ], 00:12:16.447 "product_name": "Malloc disk", 00:12:16.447 "block_size": 512, 00:12:16.447 
"num_blocks": 65536, 00:12:16.447 "uuid": "745c4e97-483a-4026-a141-0e46714193f4", 00:12:16.447 "assigned_rate_limits": { 00:12:16.447 "rw_ios_per_sec": 0, 00:12:16.447 "rw_mbytes_per_sec": 0, 00:12:16.447 "r_mbytes_per_sec": 0, 00:12:16.447 "w_mbytes_per_sec": 0 00:12:16.447 }, 00:12:16.447 "claimed": true, 00:12:16.447 "claim_type": "exclusive_write", 00:12:16.447 "zoned": false, 00:12:16.447 "supported_io_types": { 00:12:16.447 "read": true, 00:12:16.447 "write": true, 00:12:16.447 "unmap": true, 00:12:16.447 "flush": true, 00:12:16.447 "reset": true, 00:12:16.447 "nvme_admin": false, 00:12:16.447 "nvme_io": false, 00:12:16.447 "nvme_io_md": false, 00:12:16.447 "write_zeroes": true, 00:12:16.447 "zcopy": true, 00:12:16.447 "get_zone_info": false, 00:12:16.447 "zone_management": false, 00:12:16.447 "zone_append": false, 00:12:16.447 "compare": false, 00:12:16.447 "compare_and_write": false, 00:12:16.447 "abort": true, 00:12:16.447 "seek_hole": false, 00:12:16.447 "seek_data": false, 00:12:16.447 "copy": true, 00:12:16.447 "nvme_iov_md": false 00:12:16.447 }, 00:12:16.447 "memory_domains": [ 00:12:16.447 { 00:12:16.447 "dma_device_id": "system", 00:12:16.447 "dma_device_type": 1 00:12:16.447 }, 00:12:16.447 { 00:12:16.447 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:16.447 "dma_device_type": 2 00:12:16.447 } 00:12:16.447 ], 00:12:16.447 "driver_specific": {} 00:12:16.447 } 00:12:16.447 ] 00:12:16.447 09:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.447 09:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:16.447 09:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:16.447 09:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:16.447 09:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:12:16.447 09:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:16.447 09:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:16.447 09:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:16.447 09:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:16.447 09:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:16.447 09:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:16.447 09:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:16.447 09:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:16.447 09:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:16.447 09:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.447 09:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:16.447 09:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.447 09:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.447 09:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.447 09:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:16.447 "name": "Existed_Raid", 00:12:16.447 "uuid": "305703a4-8bbd-4123-a1db-4521eb3c4faa", 00:12:16.447 "strip_size_kb": 0, 00:12:16.447 "state": "online", 00:12:16.447 "raid_level": "raid1", 00:12:16.447 "superblock": true, 00:12:16.447 "num_base_bdevs": 4, 
00:12:16.447 "num_base_bdevs_discovered": 4, 00:12:16.447 "num_base_bdevs_operational": 4, 00:12:16.447 "base_bdevs_list": [ 00:12:16.447 { 00:12:16.447 "name": "BaseBdev1", 00:12:16.447 "uuid": "cce631d8-a5ef-4f97-8159-25cc7cae9630", 00:12:16.447 "is_configured": true, 00:12:16.447 "data_offset": 2048, 00:12:16.447 "data_size": 63488 00:12:16.447 }, 00:12:16.447 { 00:12:16.447 "name": "BaseBdev2", 00:12:16.447 "uuid": "92bc8753-917b-4821-b92d-783bcbd9abac", 00:12:16.447 "is_configured": true, 00:12:16.447 "data_offset": 2048, 00:12:16.447 "data_size": 63488 00:12:16.447 }, 00:12:16.447 { 00:12:16.447 "name": "BaseBdev3", 00:12:16.447 "uuid": "856a1406-7800-478d-a9f1-f3cf98d46fb8", 00:12:16.447 "is_configured": true, 00:12:16.447 "data_offset": 2048, 00:12:16.447 "data_size": 63488 00:12:16.447 }, 00:12:16.447 { 00:12:16.447 "name": "BaseBdev4", 00:12:16.447 "uuid": "745c4e97-483a-4026-a141-0e46714193f4", 00:12:16.447 "is_configured": true, 00:12:16.447 "data_offset": 2048, 00:12:16.447 "data_size": 63488 00:12:16.447 } 00:12:16.447 ] 00:12:16.447 }' 00:12:16.448 09:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:16.448 09:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.016 09:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:17.016 09:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:17.016 09:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:17.016 09:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:17.016 09:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:17.016 09:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:17.016 
09:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:17.016 09:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:17.016 09:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.016 09:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.016 [2024-10-21 09:56:53.341311] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:17.016 09:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.016 09:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:17.016 "name": "Existed_Raid", 00:12:17.016 "aliases": [ 00:12:17.016 "305703a4-8bbd-4123-a1db-4521eb3c4faa" 00:12:17.016 ], 00:12:17.016 "product_name": "Raid Volume", 00:12:17.016 "block_size": 512, 00:12:17.016 "num_blocks": 63488, 00:12:17.016 "uuid": "305703a4-8bbd-4123-a1db-4521eb3c4faa", 00:12:17.016 "assigned_rate_limits": { 00:12:17.016 "rw_ios_per_sec": 0, 00:12:17.016 "rw_mbytes_per_sec": 0, 00:12:17.016 "r_mbytes_per_sec": 0, 00:12:17.016 "w_mbytes_per_sec": 0 00:12:17.016 }, 00:12:17.016 "claimed": false, 00:12:17.017 "zoned": false, 00:12:17.017 "supported_io_types": { 00:12:17.017 "read": true, 00:12:17.017 "write": true, 00:12:17.017 "unmap": false, 00:12:17.017 "flush": false, 00:12:17.017 "reset": true, 00:12:17.017 "nvme_admin": false, 00:12:17.017 "nvme_io": false, 00:12:17.017 "nvme_io_md": false, 00:12:17.017 "write_zeroes": true, 00:12:17.017 "zcopy": false, 00:12:17.017 "get_zone_info": false, 00:12:17.017 "zone_management": false, 00:12:17.017 "zone_append": false, 00:12:17.017 "compare": false, 00:12:17.017 "compare_and_write": false, 00:12:17.017 "abort": false, 00:12:17.017 "seek_hole": false, 00:12:17.017 "seek_data": false, 00:12:17.017 "copy": false, 00:12:17.017 
"nvme_iov_md": false 00:12:17.017 }, 00:12:17.017 "memory_domains": [ 00:12:17.017 { 00:12:17.017 "dma_device_id": "system", 00:12:17.017 "dma_device_type": 1 00:12:17.017 }, 00:12:17.017 { 00:12:17.017 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:17.017 "dma_device_type": 2 00:12:17.017 }, 00:12:17.017 { 00:12:17.017 "dma_device_id": "system", 00:12:17.017 "dma_device_type": 1 00:12:17.017 }, 00:12:17.017 { 00:12:17.017 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:17.017 "dma_device_type": 2 00:12:17.017 }, 00:12:17.017 { 00:12:17.017 "dma_device_id": "system", 00:12:17.017 "dma_device_type": 1 00:12:17.017 }, 00:12:17.017 { 00:12:17.017 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:17.017 "dma_device_type": 2 00:12:17.017 }, 00:12:17.017 { 00:12:17.017 "dma_device_id": "system", 00:12:17.017 "dma_device_type": 1 00:12:17.017 }, 00:12:17.017 { 00:12:17.017 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:17.017 "dma_device_type": 2 00:12:17.017 } 00:12:17.017 ], 00:12:17.017 "driver_specific": { 00:12:17.017 "raid": { 00:12:17.017 "uuid": "305703a4-8bbd-4123-a1db-4521eb3c4faa", 00:12:17.017 "strip_size_kb": 0, 00:12:17.017 "state": "online", 00:12:17.017 "raid_level": "raid1", 00:12:17.017 "superblock": true, 00:12:17.017 "num_base_bdevs": 4, 00:12:17.017 "num_base_bdevs_discovered": 4, 00:12:17.017 "num_base_bdevs_operational": 4, 00:12:17.017 "base_bdevs_list": [ 00:12:17.017 { 00:12:17.017 "name": "BaseBdev1", 00:12:17.017 "uuid": "cce631d8-a5ef-4f97-8159-25cc7cae9630", 00:12:17.017 "is_configured": true, 00:12:17.017 "data_offset": 2048, 00:12:17.017 "data_size": 63488 00:12:17.017 }, 00:12:17.017 { 00:12:17.017 "name": "BaseBdev2", 00:12:17.017 "uuid": "92bc8753-917b-4821-b92d-783bcbd9abac", 00:12:17.017 "is_configured": true, 00:12:17.017 "data_offset": 2048, 00:12:17.017 "data_size": 63488 00:12:17.017 }, 00:12:17.017 { 00:12:17.017 "name": "BaseBdev3", 00:12:17.017 "uuid": "856a1406-7800-478d-a9f1-f3cf98d46fb8", 00:12:17.017 "is_configured": true, 
00:12:17.017 "data_offset": 2048, 00:12:17.017 "data_size": 63488 00:12:17.017 }, 00:12:17.017 { 00:12:17.017 "name": "BaseBdev4", 00:12:17.017 "uuid": "745c4e97-483a-4026-a141-0e46714193f4", 00:12:17.017 "is_configured": true, 00:12:17.017 "data_offset": 2048, 00:12:17.017 "data_size": 63488 00:12:17.017 } 00:12:17.017 ] 00:12:17.017 } 00:12:17.017 } 00:12:17.017 }' 00:12:17.017 09:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:17.017 09:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:17.017 BaseBdev2 00:12:17.017 BaseBdev3 00:12:17.017 BaseBdev4' 00:12:17.017 09:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:17.017 09:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:17.017 09:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:17.017 09:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:17.017 09:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:17.017 09:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.017 09:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.017 09:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.017 09:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:17.017 09:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:17.017 09:56:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:17.017 09:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:17.017 09:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.017 09:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.017 09:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:17.017 09:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.017 09:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:17.017 09:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:17.017 09:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:17.017 09:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:17.017 09:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:17.017 09:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.017 09:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.017 09:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.277 09:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:17.277 09:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:17.277 09:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:12:17.277 09:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:17.277 09:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:17.277 09:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.277 09:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.277 09:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.277 09:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:17.277 09:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:17.277 09:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:17.277 09:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.277 09:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.277 [2024-10-21 09:56:53.672405] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:17.277 09:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.277 09:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:17.277 09:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:12:17.277 09:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:17.277 09:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:12:17.277 09:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:12:17.277 09:56:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:12:17.277 09:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:17.277 09:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:17.277 09:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:17.277 09:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:17.277 09:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:17.277 09:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:17.277 09:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:17.277 09:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:17.277 09:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:17.277 09:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.277 09:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:17.277 09:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.277 09:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.277 09:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.277 09:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:17.277 "name": "Existed_Raid", 00:12:17.277 "uuid": "305703a4-8bbd-4123-a1db-4521eb3c4faa", 00:12:17.277 "strip_size_kb": 0, 00:12:17.277 
"state": "online", 00:12:17.277 "raid_level": "raid1", 00:12:17.277 "superblock": true, 00:12:17.277 "num_base_bdevs": 4, 00:12:17.277 "num_base_bdevs_discovered": 3, 00:12:17.277 "num_base_bdevs_operational": 3, 00:12:17.277 "base_bdevs_list": [ 00:12:17.277 { 00:12:17.277 "name": null, 00:12:17.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.277 "is_configured": false, 00:12:17.277 "data_offset": 0, 00:12:17.277 "data_size": 63488 00:12:17.277 }, 00:12:17.277 { 00:12:17.277 "name": "BaseBdev2", 00:12:17.277 "uuid": "92bc8753-917b-4821-b92d-783bcbd9abac", 00:12:17.277 "is_configured": true, 00:12:17.277 "data_offset": 2048, 00:12:17.277 "data_size": 63488 00:12:17.277 }, 00:12:17.277 { 00:12:17.277 "name": "BaseBdev3", 00:12:17.277 "uuid": "856a1406-7800-478d-a9f1-f3cf98d46fb8", 00:12:17.277 "is_configured": true, 00:12:17.277 "data_offset": 2048, 00:12:17.277 "data_size": 63488 00:12:17.277 }, 00:12:17.277 { 00:12:17.277 "name": "BaseBdev4", 00:12:17.277 "uuid": "745c4e97-483a-4026-a141-0e46714193f4", 00:12:17.277 "is_configured": true, 00:12:17.277 "data_offset": 2048, 00:12:17.277 "data_size": 63488 00:12:17.277 } 00:12:17.277 ] 00:12:17.277 }' 00:12:17.277 09:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:17.277 09:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.846 09:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:17.846 09:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:17.846 09:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.846 09:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:17.846 09:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.846 09:56:54 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.846 09:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.846 09:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:17.846 09:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:17.846 09:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:17.846 09:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.846 09:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.846 [2024-10-21 09:56:54.324895] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:18.106 09:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.106 09:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:18.106 09:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:18.106 09:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.106 09:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:18.106 09:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.106 09:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.106 09:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.106 09:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:18.106 09:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:12:18.106 09:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:18.106 09:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.106 09:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.106 [2024-10-21 09:56:54.511997] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:18.106 09:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.106 09:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:18.106 09:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:18.106 09:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:18.106 09:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.106 09:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.106 09:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.106 09:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.106 09:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:18.106 09:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:18.106 09:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:18.106 09:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.106 09:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.106 [2024-10-21 09:56:54.673551] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:18.106 [2024-10-21 09:56:54.673802] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:18.365 [2024-10-21 09:56:54.784056] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:18.365 [2024-10-21 09:56:54.784227] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:18.365 [2024-10-21 09:56:54.784275] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state offline 00:12:18.365 09:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.365 09:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:18.365 09:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:18.365 09:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.365 09:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:18.365 09:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.365 09:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.365 09:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.365 09:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:18.365 09:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:18.366 09:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:18.366 09:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:18.366 09:56:54 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:18.366 09:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:18.366 09:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.366 09:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.366 BaseBdev2 00:12:18.366 09:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.366 09:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:18.366 09:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:12:18.366 09:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:18.366 09:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:18.366 09:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:18.366 09:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:18.366 09:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:18.366 09:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.366 09:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.366 09:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.366 09:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:18.366 09:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.366 09:56:54 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:12:18.366 [ 00:12:18.366 { 00:12:18.366 "name": "BaseBdev2", 00:12:18.366 "aliases": [ 00:12:18.366 "7ddb7159-0a32-4735-a000-76d519ed5aa5" 00:12:18.366 ], 00:12:18.366 "product_name": "Malloc disk", 00:12:18.366 "block_size": 512, 00:12:18.366 "num_blocks": 65536, 00:12:18.366 "uuid": "7ddb7159-0a32-4735-a000-76d519ed5aa5", 00:12:18.366 "assigned_rate_limits": { 00:12:18.366 "rw_ios_per_sec": 0, 00:12:18.366 "rw_mbytes_per_sec": 0, 00:12:18.366 "r_mbytes_per_sec": 0, 00:12:18.366 "w_mbytes_per_sec": 0 00:12:18.366 }, 00:12:18.366 "claimed": false, 00:12:18.366 "zoned": false, 00:12:18.366 "supported_io_types": { 00:12:18.366 "read": true, 00:12:18.366 "write": true, 00:12:18.366 "unmap": true, 00:12:18.366 "flush": true, 00:12:18.366 "reset": true, 00:12:18.366 "nvme_admin": false, 00:12:18.366 "nvme_io": false, 00:12:18.366 "nvme_io_md": false, 00:12:18.366 "write_zeroes": true, 00:12:18.366 "zcopy": true, 00:12:18.366 "get_zone_info": false, 00:12:18.366 "zone_management": false, 00:12:18.366 "zone_append": false, 00:12:18.366 "compare": false, 00:12:18.366 "compare_and_write": false, 00:12:18.366 "abort": true, 00:12:18.366 "seek_hole": false, 00:12:18.366 "seek_data": false, 00:12:18.366 "copy": true, 00:12:18.366 "nvme_iov_md": false 00:12:18.366 }, 00:12:18.366 "memory_domains": [ 00:12:18.366 { 00:12:18.366 "dma_device_id": "system", 00:12:18.366 "dma_device_type": 1 00:12:18.366 }, 00:12:18.366 { 00:12:18.366 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:18.366 "dma_device_type": 2 00:12:18.366 } 00:12:18.366 ], 00:12:18.366 "driver_specific": {} 00:12:18.366 } 00:12:18.366 ] 00:12:18.366 09:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.366 09:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:18.366 09:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:18.366 09:56:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:18.366 09:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:18.366 09:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.366 09:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.627 BaseBdev3 00:12:18.627 09:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.627 09:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:18.627 09:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:12:18.627 09:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:18.627 09:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:18.627 09:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:18.627 09:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:18.627 09:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:18.627 09:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.627 09:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.627 09:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.627 09:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:18.627 09:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.627 09:56:54 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.627 [ 00:12:18.627 { 00:12:18.627 "name": "BaseBdev3", 00:12:18.627 "aliases": [ 00:12:18.627 "e96996c8-0aee-408d-b6d8-1068e64fa74d" 00:12:18.627 ], 00:12:18.627 "product_name": "Malloc disk", 00:12:18.627 "block_size": 512, 00:12:18.627 "num_blocks": 65536, 00:12:18.627 "uuid": "e96996c8-0aee-408d-b6d8-1068e64fa74d", 00:12:18.627 "assigned_rate_limits": { 00:12:18.627 "rw_ios_per_sec": 0, 00:12:18.627 "rw_mbytes_per_sec": 0, 00:12:18.627 "r_mbytes_per_sec": 0, 00:12:18.627 "w_mbytes_per_sec": 0 00:12:18.627 }, 00:12:18.627 "claimed": false, 00:12:18.627 "zoned": false, 00:12:18.627 "supported_io_types": { 00:12:18.627 "read": true, 00:12:18.627 "write": true, 00:12:18.627 "unmap": true, 00:12:18.627 "flush": true, 00:12:18.627 "reset": true, 00:12:18.627 "nvme_admin": false, 00:12:18.627 "nvme_io": false, 00:12:18.627 "nvme_io_md": false, 00:12:18.627 "write_zeroes": true, 00:12:18.627 "zcopy": true, 00:12:18.627 "get_zone_info": false, 00:12:18.627 "zone_management": false, 00:12:18.627 "zone_append": false, 00:12:18.627 "compare": false, 00:12:18.627 "compare_and_write": false, 00:12:18.627 "abort": true, 00:12:18.627 "seek_hole": false, 00:12:18.627 "seek_data": false, 00:12:18.627 "copy": true, 00:12:18.627 "nvme_iov_md": false 00:12:18.627 }, 00:12:18.627 "memory_domains": [ 00:12:18.627 { 00:12:18.627 "dma_device_id": "system", 00:12:18.627 "dma_device_type": 1 00:12:18.627 }, 00:12:18.627 { 00:12:18.627 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:18.627 "dma_device_type": 2 00:12:18.627 } 00:12:18.627 ], 00:12:18.627 "driver_specific": {} 00:12:18.627 } 00:12:18.627 ] 00:12:18.627 09:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.627 09:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:18.627 09:56:55 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:18.627 09:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:18.627 09:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:18.627 09:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.627 09:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.627 BaseBdev4 00:12:18.627 09:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.627 09:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:18.627 09:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:12:18.627 09:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:18.627 09:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:18.627 09:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:18.627 09:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:18.627 09:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:18.627 09:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.627 09:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.627 09:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.627 09:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:18.627 09:56:55 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.627 09:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.627 [ 00:12:18.627 { 00:12:18.627 "name": "BaseBdev4", 00:12:18.627 "aliases": [ 00:12:18.627 "9aa5a19e-30fe-4ab3-95b7-e4d634d3f484" 00:12:18.627 ], 00:12:18.627 "product_name": "Malloc disk", 00:12:18.627 "block_size": 512, 00:12:18.627 "num_blocks": 65536, 00:12:18.627 "uuid": "9aa5a19e-30fe-4ab3-95b7-e4d634d3f484", 00:12:18.627 "assigned_rate_limits": { 00:12:18.627 "rw_ios_per_sec": 0, 00:12:18.627 "rw_mbytes_per_sec": 0, 00:12:18.627 "r_mbytes_per_sec": 0, 00:12:18.627 "w_mbytes_per_sec": 0 00:12:18.627 }, 00:12:18.627 "claimed": false, 00:12:18.627 "zoned": false, 00:12:18.628 "supported_io_types": { 00:12:18.628 "read": true, 00:12:18.628 "write": true, 00:12:18.628 "unmap": true, 00:12:18.628 "flush": true, 00:12:18.628 "reset": true, 00:12:18.628 "nvme_admin": false, 00:12:18.628 "nvme_io": false, 00:12:18.628 "nvme_io_md": false, 00:12:18.628 "write_zeroes": true, 00:12:18.628 "zcopy": true, 00:12:18.628 "get_zone_info": false, 00:12:18.628 "zone_management": false, 00:12:18.628 "zone_append": false, 00:12:18.628 "compare": false, 00:12:18.628 "compare_and_write": false, 00:12:18.628 "abort": true, 00:12:18.628 "seek_hole": false, 00:12:18.628 "seek_data": false, 00:12:18.628 "copy": true, 00:12:18.628 "nvme_iov_md": false 00:12:18.628 }, 00:12:18.628 "memory_domains": [ 00:12:18.628 { 00:12:18.628 "dma_device_id": "system", 00:12:18.628 "dma_device_type": 1 00:12:18.628 }, 00:12:18.628 { 00:12:18.628 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:18.628 "dma_device_type": 2 00:12:18.628 } 00:12:18.628 ], 00:12:18.628 "driver_specific": {} 00:12:18.628 } 00:12:18.628 ] 00:12:18.628 09:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.628 09:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 
00:12:18.628 09:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:18.628 09:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:18.628 09:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:18.628 09:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.628 09:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.628 [2024-10-21 09:56:55.113514] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:18.628 [2024-10-21 09:56:55.113668] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:18.628 [2024-10-21 09:56:55.113714] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:18.628 [2024-10-21 09:56:55.115929] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:18.628 [2024-10-21 09:56:55.116030] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:18.628 09:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.628 09:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:18.628 09:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:18.628 09:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:18.628 09:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:18.628 09:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:12:18.628 09:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:18.628 09:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:18.628 09:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:18.628 09:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:18.628 09:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:18.628 09:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.628 09:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:18.628 09:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.628 09:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.628 09:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.628 09:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:18.628 "name": "Existed_Raid", 00:12:18.628 "uuid": "e1b4ebf7-bafd-4f88-b7d5-a0ab78faf0fb", 00:12:18.628 "strip_size_kb": 0, 00:12:18.628 "state": "configuring", 00:12:18.628 "raid_level": "raid1", 00:12:18.628 "superblock": true, 00:12:18.628 "num_base_bdevs": 4, 00:12:18.628 "num_base_bdevs_discovered": 3, 00:12:18.628 "num_base_bdevs_operational": 4, 00:12:18.628 "base_bdevs_list": [ 00:12:18.628 { 00:12:18.628 "name": "BaseBdev1", 00:12:18.628 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:18.628 "is_configured": false, 00:12:18.628 "data_offset": 0, 00:12:18.628 "data_size": 0 00:12:18.628 }, 00:12:18.628 { 00:12:18.628 "name": "BaseBdev2", 00:12:18.628 "uuid": "7ddb7159-0a32-4735-a000-76d519ed5aa5", 
00:12:18.628 "is_configured": true, 00:12:18.628 "data_offset": 2048, 00:12:18.628 "data_size": 63488 00:12:18.628 }, 00:12:18.628 { 00:12:18.628 "name": "BaseBdev3", 00:12:18.628 "uuid": "e96996c8-0aee-408d-b6d8-1068e64fa74d", 00:12:18.628 "is_configured": true, 00:12:18.628 "data_offset": 2048, 00:12:18.628 "data_size": 63488 00:12:18.628 }, 00:12:18.628 { 00:12:18.628 "name": "BaseBdev4", 00:12:18.628 "uuid": "9aa5a19e-30fe-4ab3-95b7-e4d634d3f484", 00:12:18.628 "is_configured": true, 00:12:18.628 "data_offset": 2048, 00:12:18.628 "data_size": 63488 00:12:18.628 } 00:12:18.628 ] 00:12:18.628 }' 00:12:18.628 09:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:18.628 09:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.199 09:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:19.199 09:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.199 09:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.199 [2024-10-21 09:56:55.580802] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:19.199 09:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.199 09:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:19.199 09:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:19.199 09:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:19.199 09:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:19.199 09:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:12:19.199 09:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:19.199 09:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:19.199 09:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:19.199 09:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:19.199 09:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:19.199 09:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:19.199 09:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.199 09:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.199 09:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.199 09:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.199 09:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:19.199 "name": "Existed_Raid", 00:12:19.199 "uuid": "e1b4ebf7-bafd-4f88-b7d5-a0ab78faf0fb", 00:12:19.199 "strip_size_kb": 0, 00:12:19.199 "state": "configuring", 00:12:19.199 "raid_level": "raid1", 00:12:19.199 "superblock": true, 00:12:19.199 "num_base_bdevs": 4, 00:12:19.199 "num_base_bdevs_discovered": 2, 00:12:19.199 "num_base_bdevs_operational": 4, 00:12:19.200 "base_bdevs_list": [ 00:12:19.200 { 00:12:19.200 "name": "BaseBdev1", 00:12:19.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:19.200 "is_configured": false, 00:12:19.200 "data_offset": 0, 00:12:19.200 "data_size": 0 00:12:19.200 }, 00:12:19.200 { 00:12:19.200 "name": null, 00:12:19.200 "uuid": "7ddb7159-0a32-4735-a000-76d519ed5aa5", 00:12:19.200 
"is_configured": false, 00:12:19.200 "data_offset": 0, 00:12:19.200 "data_size": 63488 00:12:19.200 }, 00:12:19.200 { 00:12:19.200 "name": "BaseBdev3", 00:12:19.200 "uuid": "e96996c8-0aee-408d-b6d8-1068e64fa74d", 00:12:19.200 "is_configured": true, 00:12:19.200 "data_offset": 2048, 00:12:19.200 "data_size": 63488 00:12:19.200 }, 00:12:19.200 { 00:12:19.200 "name": "BaseBdev4", 00:12:19.200 "uuid": "9aa5a19e-30fe-4ab3-95b7-e4d634d3f484", 00:12:19.200 "is_configured": true, 00:12:19.200 "data_offset": 2048, 00:12:19.200 "data_size": 63488 00:12:19.200 } 00:12:19.200 ] 00:12:19.200 }' 00:12:19.200 09:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:19.200 09:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.459 09:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:19.459 09:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.459 09:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.459 09:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.459 09:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.459 09:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:19.459 09:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:19.717 09:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.717 09:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.717 [2024-10-21 09:56:56.103862] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:19.717 BaseBdev1 
00:12:19.717 09:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.717 09:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:19.717 09:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:12:19.717 09:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:19.717 09:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:19.717 09:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:19.717 09:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:19.717 09:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:19.717 09:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.717 09:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.717 09:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.717 09:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:19.717 09:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.717 09:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.717 [ 00:12:19.717 { 00:12:19.717 "name": "BaseBdev1", 00:12:19.717 "aliases": [ 00:12:19.717 "93bed761-dcc6-49ac-a022-1585aab02121" 00:12:19.717 ], 00:12:19.717 "product_name": "Malloc disk", 00:12:19.717 "block_size": 512, 00:12:19.717 "num_blocks": 65536, 00:12:19.717 "uuid": "93bed761-dcc6-49ac-a022-1585aab02121", 00:12:19.717 "assigned_rate_limits": { 00:12:19.717 
"rw_ios_per_sec": 0, 00:12:19.717 "rw_mbytes_per_sec": 0, 00:12:19.717 "r_mbytes_per_sec": 0, 00:12:19.717 "w_mbytes_per_sec": 0 00:12:19.717 }, 00:12:19.717 "claimed": true, 00:12:19.717 "claim_type": "exclusive_write", 00:12:19.717 "zoned": false, 00:12:19.717 "supported_io_types": { 00:12:19.717 "read": true, 00:12:19.717 "write": true, 00:12:19.717 "unmap": true, 00:12:19.717 "flush": true, 00:12:19.717 "reset": true, 00:12:19.717 "nvme_admin": false, 00:12:19.717 "nvme_io": false, 00:12:19.717 "nvme_io_md": false, 00:12:19.717 "write_zeroes": true, 00:12:19.717 "zcopy": true, 00:12:19.717 "get_zone_info": false, 00:12:19.717 "zone_management": false, 00:12:19.717 "zone_append": false, 00:12:19.717 "compare": false, 00:12:19.717 "compare_and_write": false, 00:12:19.717 "abort": true, 00:12:19.717 "seek_hole": false, 00:12:19.717 "seek_data": false, 00:12:19.717 "copy": true, 00:12:19.717 "nvme_iov_md": false 00:12:19.717 }, 00:12:19.717 "memory_domains": [ 00:12:19.717 { 00:12:19.717 "dma_device_id": "system", 00:12:19.717 "dma_device_type": 1 00:12:19.717 }, 00:12:19.717 { 00:12:19.717 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:19.717 "dma_device_type": 2 00:12:19.717 } 00:12:19.717 ], 00:12:19.717 "driver_specific": {} 00:12:19.717 } 00:12:19.717 ] 00:12:19.717 09:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.717 09:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:19.717 09:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:19.717 09:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:19.717 09:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:19.717 09:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:12:19.717 09:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:19.717 09:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:19.717 09:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:19.717 09:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:19.717 09:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:19.717 09:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:19.717 09:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.717 09:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:19.717 09:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.717 09:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.717 09:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.717 09:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:19.717 "name": "Existed_Raid", 00:12:19.717 "uuid": "e1b4ebf7-bafd-4f88-b7d5-a0ab78faf0fb", 00:12:19.717 "strip_size_kb": 0, 00:12:19.717 "state": "configuring", 00:12:19.717 "raid_level": "raid1", 00:12:19.717 "superblock": true, 00:12:19.717 "num_base_bdevs": 4, 00:12:19.717 "num_base_bdevs_discovered": 3, 00:12:19.717 "num_base_bdevs_operational": 4, 00:12:19.717 "base_bdevs_list": [ 00:12:19.717 { 00:12:19.717 "name": "BaseBdev1", 00:12:19.717 "uuid": "93bed761-dcc6-49ac-a022-1585aab02121", 00:12:19.717 "is_configured": true, 00:12:19.717 "data_offset": 2048, 00:12:19.717 "data_size": 63488 
00:12:19.717 }, 00:12:19.717 { 00:12:19.717 "name": null, 00:12:19.717 "uuid": "7ddb7159-0a32-4735-a000-76d519ed5aa5", 00:12:19.717 "is_configured": false, 00:12:19.717 "data_offset": 0, 00:12:19.717 "data_size": 63488 00:12:19.717 }, 00:12:19.717 { 00:12:19.717 "name": "BaseBdev3", 00:12:19.717 "uuid": "e96996c8-0aee-408d-b6d8-1068e64fa74d", 00:12:19.717 "is_configured": true, 00:12:19.717 "data_offset": 2048, 00:12:19.717 "data_size": 63488 00:12:19.717 }, 00:12:19.717 { 00:12:19.718 "name": "BaseBdev4", 00:12:19.718 "uuid": "9aa5a19e-30fe-4ab3-95b7-e4d634d3f484", 00:12:19.718 "is_configured": true, 00:12:19.718 "data_offset": 2048, 00:12:19.718 "data_size": 63488 00:12:19.718 } 00:12:19.718 ] 00:12:19.718 }' 00:12:19.718 09:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:19.718 09:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.283 09:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:20.283 09:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.284 09:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.284 09:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.284 09:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.284 09:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:20.284 09:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:20.284 09:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.284 09:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.284 
[2024-10-21 09:56:56.666996] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:20.284 09:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.284 09:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:20.284 09:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:20.284 09:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:20.284 09:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:20.284 09:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:20.284 09:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:20.284 09:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:20.284 09:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:20.284 09:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:20.284 09:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:20.284 09:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:20.284 09:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.284 09:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.284 09:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.284 09:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.284 09:56:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:20.284 "name": "Existed_Raid", 00:12:20.284 "uuid": "e1b4ebf7-bafd-4f88-b7d5-a0ab78faf0fb", 00:12:20.284 "strip_size_kb": 0, 00:12:20.284 "state": "configuring", 00:12:20.284 "raid_level": "raid1", 00:12:20.284 "superblock": true, 00:12:20.284 "num_base_bdevs": 4, 00:12:20.284 "num_base_bdevs_discovered": 2, 00:12:20.284 "num_base_bdevs_operational": 4, 00:12:20.284 "base_bdevs_list": [ 00:12:20.284 { 00:12:20.284 "name": "BaseBdev1", 00:12:20.284 "uuid": "93bed761-dcc6-49ac-a022-1585aab02121", 00:12:20.284 "is_configured": true, 00:12:20.284 "data_offset": 2048, 00:12:20.284 "data_size": 63488 00:12:20.284 }, 00:12:20.284 { 00:12:20.284 "name": null, 00:12:20.284 "uuid": "7ddb7159-0a32-4735-a000-76d519ed5aa5", 00:12:20.284 "is_configured": false, 00:12:20.284 "data_offset": 0, 00:12:20.284 "data_size": 63488 00:12:20.284 }, 00:12:20.284 { 00:12:20.284 "name": null, 00:12:20.284 "uuid": "e96996c8-0aee-408d-b6d8-1068e64fa74d", 00:12:20.284 "is_configured": false, 00:12:20.284 "data_offset": 0, 00:12:20.284 "data_size": 63488 00:12:20.284 }, 00:12:20.284 { 00:12:20.284 "name": "BaseBdev4", 00:12:20.284 "uuid": "9aa5a19e-30fe-4ab3-95b7-e4d634d3f484", 00:12:20.284 "is_configured": true, 00:12:20.284 "data_offset": 2048, 00:12:20.284 "data_size": 63488 00:12:20.284 } 00:12:20.284 ] 00:12:20.284 }' 00:12:20.284 09:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:20.284 09:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.849 09:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.849 09:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:20.849 09:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.849 
09:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.849 09:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.849 09:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:20.849 09:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:20.849 09:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.849 09:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.849 [2024-10-21 09:56:57.214279] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:20.849 09:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.849 09:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:20.849 09:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:20.849 09:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:20.849 09:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:20.849 09:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:20.849 09:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:20.849 09:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:20.849 09:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:20.850 09:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:12:20.850 09:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:20.850 09:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:20.850 09:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.850 09:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.850 09:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.850 09:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.850 09:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:20.850 "name": "Existed_Raid", 00:12:20.850 "uuid": "e1b4ebf7-bafd-4f88-b7d5-a0ab78faf0fb", 00:12:20.850 "strip_size_kb": 0, 00:12:20.850 "state": "configuring", 00:12:20.850 "raid_level": "raid1", 00:12:20.850 "superblock": true, 00:12:20.850 "num_base_bdevs": 4, 00:12:20.850 "num_base_bdevs_discovered": 3, 00:12:20.850 "num_base_bdevs_operational": 4, 00:12:20.850 "base_bdevs_list": [ 00:12:20.850 { 00:12:20.850 "name": "BaseBdev1", 00:12:20.850 "uuid": "93bed761-dcc6-49ac-a022-1585aab02121", 00:12:20.850 "is_configured": true, 00:12:20.850 "data_offset": 2048, 00:12:20.850 "data_size": 63488 00:12:20.850 }, 00:12:20.850 { 00:12:20.850 "name": null, 00:12:20.850 "uuid": "7ddb7159-0a32-4735-a000-76d519ed5aa5", 00:12:20.850 "is_configured": false, 00:12:20.850 "data_offset": 0, 00:12:20.850 "data_size": 63488 00:12:20.850 }, 00:12:20.850 { 00:12:20.850 "name": "BaseBdev3", 00:12:20.850 "uuid": "e96996c8-0aee-408d-b6d8-1068e64fa74d", 00:12:20.850 "is_configured": true, 00:12:20.850 "data_offset": 2048, 00:12:20.850 "data_size": 63488 00:12:20.850 }, 00:12:20.850 { 00:12:20.850 "name": "BaseBdev4", 00:12:20.850 "uuid": 
"9aa5a19e-30fe-4ab3-95b7-e4d634d3f484", 00:12:20.850 "is_configured": true, 00:12:20.850 "data_offset": 2048, 00:12:20.850 "data_size": 63488 00:12:20.850 } 00:12:20.850 ] 00:12:20.850 }' 00:12:20.850 09:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:20.850 09:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.122 09:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.122 09:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:21.122 09:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.122 09:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.122 09:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.123 09:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:21.123 09:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:21.123 09:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.123 09:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.401 [2024-10-21 09:56:57.709771] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:21.401 09:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.401 09:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:21.401 09:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:21.401 09:56:57 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:21.401 09:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:21.401 09:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:21.401 09:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:21.401 09:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:21.401 09:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:21.401 09:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:21.401 09:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:21.401 09:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.401 09:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.401 09:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.401 09:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:21.401 09:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.401 09:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:21.401 "name": "Existed_Raid", 00:12:21.401 "uuid": "e1b4ebf7-bafd-4f88-b7d5-a0ab78faf0fb", 00:12:21.401 "strip_size_kb": 0, 00:12:21.401 "state": "configuring", 00:12:21.401 "raid_level": "raid1", 00:12:21.401 "superblock": true, 00:12:21.401 "num_base_bdevs": 4, 00:12:21.401 "num_base_bdevs_discovered": 2, 00:12:21.401 "num_base_bdevs_operational": 4, 00:12:21.401 "base_bdevs_list": [ 00:12:21.401 { 00:12:21.401 "name": null, 00:12:21.401 
"uuid": "93bed761-dcc6-49ac-a022-1585aab02121", 00:12:21.401 "is_configured": false, 00:12:21.401 "data_offset": 0, 00:12:21.401 "data_size": 63488 00:12:21.401 }, 00:12:21.401 { 00:12:21.401 "name": null, 00:12:21.401 "uuid": "7ddb7159-0a32-4735-a000-76d519ed5aa5", 00:12:21.401 "is_configured": false, 00:12:21.401 "data_offset": 0, 00:12:21.401 "data_size": 63488 00:12:21.401 }, 00:12:21.401 { 00:12:21.401 "name": "BaseBdev3", 00:12:21.401 "uuid": "e96996c8-0aee-408d-b6d8-1068e64fa74d", 00:12:21.401 "is_configured": true, 00:12:21.401 "data_offset": 2048, 00:12:21.401 "data_size": 63488 00:12:21.401 }, 00:12:21.401 { 00:12:21.401 "name": "BaseBdev4", 00:12:21.401 "uuid": "9aa5a19e-30fe-4ab3-95b7-e4d634d3f484", 00:12:21.401 "is_configured": true, 00:12:21.401 "data_offset": 2048, 00:12:21.401 "data_size": 63488 00:12:21.401 } 00:12:21.401 ] 00:12:21.401 }' 00:12:21.401 09:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:21.401 09:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.969 09:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.969 09:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.969 09:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.969 09:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:21.969 09:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.969 09:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:21.969 09:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:21.969 09:56:58 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.969 09:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.969 [2024-10-21 09:56:58.319219] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:21.969 09:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.969 09:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:21.969 09:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:21.969 09:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:21.969 09:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:21.969 09:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:21.969 09:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:21.969 09:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:21.969 09:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:21.969 09:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:21.969 09:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:21.969 09:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.969 09:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.969 09:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.969 09:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:21.969 09:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.969 09:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:21.969 "name": "Existed_Raid", 00:12:21.969 "uuid": "e1b4ebf7-bafd-4f88-b7d5-a0ab78faf0fb", 00:12:21.969 "strip_size_kb": 0, 00:12:21.969 "state": "configuring", 00:12:21.969 "raid_level": "raid1", 00:12:21.969 "superblock": true, 00:12:21.969 "num_base_bdevs": 4, 00:12:21.969 "num_base_bdevs_discovered": 3, 00:12:21.969 "num_base_bdevs_operational": 4, 00:12:21.969 "base_bdevs_list": [ 00:12:21.969 { 00:12:21.969 "name": null, 00:12:21.969 "uuid": "93bed761-dcc6-49ac-a022-1585aab02121", 00:12:21.969 "is_configured": false, 00:12:21.969 "data_offset": 0, 00:12:21.969 "data_size": 63488 00:12:21.969 }, 00:12:21.969 { 00:12:21.969 "name": "BaseBdev2", 00:12:21.969 "uuid": "7ddb7159-0a32-4735-a000-76d519ed5aa5", 00:12:21.969 "is_configured": true, 00:12:21.969 "data_offset": 2048, 00:12:21.969 "data_size": 63488 00:12:21.969 }, 00:12:21.969 { 00:12:21.969 "name": "BaseBdev3", 00:12:21.969 "uuid": "e96996c8-0aee-408d-b6d8-1068e64fa74d", 00:12:21.969 "is_configured": true, 00:12:21.969 "data_offset": 2048, 00:12:21.969 "data_size": 63488 00:12:21.969 }, 00:12:21.969 { 00:12:21.969 "name": "BaseBdev4", 00:12:21.969 "uuid": "9aa5a19e-30fe-4ab3-95b7-e4d634d3f484", 00:12:21.969 "is_configured": true, 00:12:21.969 "data_offset": 2048, 00:12:21.969 "data_size": 63488 00:12:21.969 } 00:12:21.969 ] 00:12:21.969 }' 00:12:21.969 09:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:21.969 09:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.227 09:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.227 09:56:58 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:22.227 09:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.227 09:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.227 09:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.485 09:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:22.485 09:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.485 09:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.485 09:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.485 09:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:22.485 09:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.485 09:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 93bed761-dcc6-49ac-a022-1585aab02121 00:12:22.485 09:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.485 09:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.485 [2024-10-21 09:56:58.954476] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:22.485 [2024-10-21 09:56:58.955083] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:12:22.485 [2024-10-21 09:56:58.955155] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:22.485 [2024-10-21 09:56:58.955585] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:12:22.485 
NewBaseBdev 00:12:22.485 [2024-10-21 09:56:58.955857] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:12:22.485 [2024-10-21 09:56:58.955870] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006600 00:12:22.485 [2024-10-21 09:56:58.956051] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:22.485 09:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.485 09:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:22.485 09:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:12:22.485 09:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:22.485 09:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:22.485 09:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:22.485 09:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:22.485 09:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:22.485 09:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.485 09:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.485 09:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.485 09:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:22.485 09:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.485 09:56:58 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:22.485 [ 00:12:22.485 { 00:12:22.485 "name": "NewBaseBdev", 00:12:22.485 "aliases": [ 00:12:22.485 "93bed761-dcc6-49ac-a022-1585aab02121" 00:12:22.485 ], 00:12:22.485 "product_name": "Malloc disk", 00:12:22.485 "block_size": 512, 00:12:22.485 "num_blocks": 65536, 00:12:22.485 "uuid": "93bed761-dcc6-49ac-a022-1585aab02121", 00:12:22.485 "assigned_rate_limits": { 00:12:22.485 "rw_ios_per_sec": 0, 00:12:22.485 "rw_mbytes_per_sec": 0, 00:12:22.485 "r_mbytes_per_sec": 0, 00:12:22.485 "w_mbytes_per_sec": 0 00:12:22.485 }, 00:12:22.485 "claimed": true, 00:12:22.485 "claim_type": "exclusive_write", 00:12:22.485 "zoned": false, 00:12:22.485 "supported_io_types": { 00:12:22.485 "read": true, 00:12:22.485 "write": true, 00:12:22.485 "unmap": true, 00:12:22.485 "flush": true, 00:12:22.485 "reset": true, 00:12:22.485 "nvme_admin": false, 00:12:22.485 "nvme_io": false, 00:12:22.485 "nvme_io_md": false, 00:12:22.485 "write_zeroes": true, 00:12:22.485 "zcopy": true, 00:12:22.485 "get_zone_info": false, 00:12:22.485 "zone_management": false, 00:12:22.485 "zone_append": false, 00:12:22.485 "compare": false, 00:12:22.485 "compare_and_write": false, 00:12:22.486 "abort": true, 00:12:22.486 "seek_hole": false, 00:12:22.486 "seek_data": false, 00:12:22.486 "copy": true, 00:12:22.486 "nvme_iov_md": false 00:12:22.486 }, 00:12:22.486 "memory_domains": [ 00:12:22.486 { 00:12:22.486 "dma_device_id": "system", 00:12:22.486 "dma_device_type": 1 00:12:22.486 }, 00:12:22.486 { 00:12:22.486 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:22.486 "dma_device_type": 2 00:12:22.486 } 00:12:22.486 ], 00:12:22.486 "driver_specific": {} 00:12:22.486 } 00:12:22.486 ] 00:12:22.486 09:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.486 09:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:22.486 09:56:58 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:12:22.486 09:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:22.486 09:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:22.486 09:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:22.486 09:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:22.486 09:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:22.486 09:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:22.486 09:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:22.486 09:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:22.486 09:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:22.486 09:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.486 09:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:22.486 09:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.486 09:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.486 09:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.486 09:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:22.486 "name": "Existed_Raid", 00:12:22.486 "uuid": "e1b4ebf7-bafd-4f88-b7d5-a0ab78faf0fb", 00:12:22.486 "strip_size_kb": 0, 00:12:22.486 "state": "online", 00:12:22.486 "raid_level": 
"raid1", 00:12:22.486 "superblock": true, 00:12:22.486 "num_base_bdevs": 4, 00:12:22.486 "num_base_bdevs_discovered": 4, 00:12:22.486 "num_base_bdevs_operational": 4, 00:12:22.486 "base_bdevs_list": [ 00:12:22.486 { 00:12:22.486 "name": "NewBaseBdev", 00:12:22.486 "uuid": "93bed761-dcc6-49ac-a022-1585aab02121", 00:12:22.486 "is_configured": true, 00:12:22.486 "data_offset": 2048, 00:12:22.486 "data_size": 63488 00:12:22.486 }, 00:12:22.486 { 00:12:22.486 "name": "BaseBdev2", 00:12:22.486 "uuid": "7ddb7159-0a32-4735-a000-76d519ed5aa5", 00:12:22.486 "is_configured": true, 00:12:22.486 "data_offset": 2048, 00:12:22.486 "data_size": 63488 00:12:22.486 }, 00:12:22.486 { 00:12:22.486 "name": "BaseBdev3", 00:12:22.486 "uuid": "e96996c8-0aee-408d-b6d8-1068e64fa74d", 00:12:22.486 "is_configured": true, 00:12:22.486 "data_offset": 2048, 00:12:22.486 "data_size": 63488 00:12:22.486 }, 00:12:22.486 { 00:12:22.486 "name": "BaseBdev4", 00:12:22.486 "uuid": "9aa5a19e-30fe-4ab3-95b7-e4d634d3f484", 00:12:22.486 "is_configured": true, 00:12:22.486 "data_offset": 2048, 00:12:22.486 "data_size": 63488 00:12:22.486 } 00:12:22.486 ] 00:12:22.486 }' 00:12:22.486 09:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:22.486 09:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.054 09:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:23.054 09:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:23.054 09:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:23.054 09:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:23.054 09:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:23.054 09:56:59 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:23.054 09:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:23.054 09:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:23.054 09:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.054 09:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.054 [2024-10-21 09:56:59.474205] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:23.054 09:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.054 09:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:23.054 "name": "Existed_Raid", 00:12:23.054 "aliases": [ 00:12:23.054 "e1b4ebf7-bafd-4f88-b7d5-a0ab78faf0fb" 00:12:23.054 ], 00:12:23.054 "product_name": "Raid Volume", 00:12:23.054 "block_size": 512, 00:12:23.054 "num_blocks": 63488, 00:12:23.054 "uuid": "e1b4ebf7-bafd-4f88-b7d5-a0ab78faf0fb", 00:12:23.054 "assigned_rate_limits": { 00:12:23.054 "rw_ios_per_sec": 0, 00:12:23.054 "rw_mbytes_per_sec": 0, 00:12:23.054 "r_mbytes_per_sec": 0, 00:12:23.054 "w_mbytes_per_sec": 0 00:12:23.054 }, 00:12:23.054 "claimed": false, 00:12:23.054 "zoned": false, 00:12:23.054 "supported_io_types": { 00:12:23.054 "read": true, 00:12:23.054 "write": true, 00:12:23.054 "unmap": false, 00:12:23.054 "flush": false, 00:12:23.054 "reset": true, 00:12:23.054 "nvme_admin": false, 00:12:23.054 "nvme_io": false, 00:12:23.054 "nvme_io_md": false, 00:12:23.054 "write_zeroes": true, 00:12:23.054 "zcopy": false, 00:12:23.054 "get_zone_info": false, 00:12:23.054 "zone_management": false, 00:12:23.054 "zone_append": false, 00:12:23.054 "compare": false, 00:12:23.054 "compare_and_write": false, 00:12:23.054 "abort": false, 00:12:23.054 "seek_hole": false, 
00:12:23.054 "seek_data": false, 00:12:23.054 "copy": false, 00:12:23.054 "nvme_iov_md": false 00:12:23.054 }, 00:12:23.054 "memory_domains": [ 00:12:23.054 { 00:12:23.054 "dma_device_id": "system", 00:12:23.054 "dma_device_type": 1 00:12:23.054 }, 00:12:23.054 { 00:12:23.054 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:23.054 "dma_device_type": 2 00:12:23.054 }, 00:12:23.054 { 00:12:23.054 "dma_device_id": "system", 00:12:23.054 "dma_device_type": 1 00:12:23.054 }, 00:12:23.054 { 00:12:23.054 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:23.054 "dma_device_type": 2 00:12:23.054 }, 00:12:23.054 { 00:12:23.054 "dma_device_id": "system", 00:12:23.054 "dma_device_type": 1 00:12:23.054 }, 00:12:23.054 { 00:12:23.054 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:23.054 "dma_device_type": 2 00:12:23.054 }, 00:12:23.054 { 00:12:23.054 "dma_device_id": "system", 00:12:23.054 "dma_device_type": 1 00:12:23.054 }, 00:12:23.054 { 00:12:23.054 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:23.054 "dma_device_type": 2 00:12:23.054 } 00:12:23.054 ], 00:12:23.054 "driver_specific": { 00:12:23.054 "raid": { 00:12:23.054 "uuid": "e1b4ebf7-bafd-4f88-b7d5-a0ab78faf0fb", 00:12:23.054 "strip_size_kb": 0, 00:12:23.054 "state": "online", 00:12:23.054 "raid_level": "raid1", 00:12:23.054 "superblock": true, 00:12:23.054 "num_base_bdevs": 4, 00:12:23.054 "num_base_bdevs_discovered": 4, 00:12:23.054 "num_base_bdevs_operational": 4, 00:12:23.054 "base_bdevs_list": [ 00:12:23.054 { 00:12:23.054 "name": "NewBaseBdev", 00:12:23.054 "uuid": "93bed761-dcc6-49ac-a022-1585aab02121", 00:12:23.054 "is_configured": true, 00:12:23.054 "data_offset": 2048, 00:12:23.054 "data_size": 63488 00:12:23.054 }, 00:12:23.054 { 00:12:23.054 "name": "BaseBdev2", 00:12:23.054 "uuid": "7ddb7159-0a32-4735-a000-76d519ed5aa5", 00:12:23.054 "is_configured": true, 00:12:23.054 "data_offset": 2048, 00:12:23.054 "data_size": 63488 00:12:23.054 }, 00:12:23.054 { 00:12:23.054 "name": "BaseBdev3", 00:12:23.054 "uuid": 
"e96996c8-0aee-408d-b6d8-1068e64fa74d", 00:12:23.054 "is_configured": true, 00:12:23.054 "data_offset": 2048, 00:12:23.054 "data_size": 63488 00:12:23.054 }, 00:12:23.054 { 00:12:23.054 "name": "BaseBdev4", 00:12:23.054 "uuid": "9aa5a19e-30fe-4ab3-95b7-e4d634d3f484", 00:12:23.054 "is_configured": true, 00:12:23.054 "data_offset": 2048, 00:12:23.054 "data_size": 63488 00:12:23.054 } 00:12:23.054 ] 00:12:23.054 } 00:12:23.054 } 00:12:23.054 }' 00:12:23.054 09:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:23.054 09:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:23.054 BaseBdev2 00:12:23.054 BaseBdev3 00:12:23.054 BaseBdev4' 00:12:23.054 09:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:23.054 09:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:23.054 09:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:23.054 09:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:23.054 09:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:23.054 09:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.054 09:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.054 09:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.314 09:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:23.314 09:56:59 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:23.314 09:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:23.314 09:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:23.314 09:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:23.314 09:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.314 09:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.314 09:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.314 09:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:23.314 09:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:23.314 09:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:23.314 09:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:23.314 09:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:23.314 09:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.314 09:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.314 09:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.314 09:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:23.314 09:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:23.314 
09:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:23.314 09:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:23.314 09:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:23.314 09:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.315 09:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.315 09:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.315 09:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:23.315 09:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:23.315 09:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:23.315 09:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.315 09:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.315 [2024-10-21 09:56:59.821203] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:23.315 [2024-10-21 09:56:59.821352] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:23.315 [2024-10-21 09:56:59.821506] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:23.315 [2024-10-21 09:56:59.821912] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:23.315 [2024-10-21 09:56:59.821981] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state offline 00:12:23.315 09:56:59 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.315 09:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 73450 00:12:23.315 09:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 73450 ']' 00:12:23.315 09:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 73450 00:12:23.315 09:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:12:23.315 09:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:23.315 09:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73450 00:12:23.315 killing process with pid 73450 00:12:23.315 09:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:23.315 09:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:23.315 09:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73450' 00:12:23.315 09:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 73450 00:12:23.315 [2024-10-21 09:56:59.868040] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:23.315 09:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 73450 00:12:23.881 [2024-10-21 09:57:00.370508] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:25.257 09:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:12:25.257 00:12:25.257 real 0m12.691s 00:12:25.257 user 0m19.673s 00:12:25.257 sys 0m2.432s 00:12:25.257 09:57:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:25.257 09:57:01 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:25.257 ************************************ 00:12:25.257 END TEST raid_state_function_test_sb 00:12:25.257 ************************************ 00:12:25.257 09:57:01 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:12:25.257 09:57:01 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:25.257 09:57:01 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:25.257 09:57:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:25.257 ************************************ 00:12:25.257 START TEST raid_superblock_test 00:12:25.257 ************************************ 00:12:25.257 09:57:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 4 00:12:25.257 09:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:12:25.257 09:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:12:25.257 09:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:25.257 09:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:25.257 09:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:25.257 09:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:25.257 09:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:25.257 09:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:25.257 09:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:25.257 09:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:25.257 09:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 
00:12:25.257 09:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:25.257 09:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:25.257 09:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:12:25.257 09:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:12:25.257 09:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74126 00:12:25.257 09:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:12:25.257 09:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74126 00:12:25.257 09:57:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 74126 ']' 00:12:25.257 09:57:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:25.516 09:57:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:25.516 09:57:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:25.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:25.516 09:57:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:25.516 09:57:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.516 [2024-10-21 09:57:01.955448] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
00:12:25.516 [2024-10-21 09:57:01.955774] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74126 ] 00:12:25.774 [2024-10-21 09:57:02.130596] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:25.774 [2024-10-21 09:57:02.298190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:26.032 [2024-10-21 09:57:02.581396] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:26.032 [2024-10-21 09:57:02.581578] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:26.291 09:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:26.291 09:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:12:26.291 09:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:12:26.291 09:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:26.291 09:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:12:26.291 09:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:12:26.291 09:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:26.291 09:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:26.291 09:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:26.291 09:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:26.291 09:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:12:26.291 
09:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.291 09:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.549 malloc1 00:12:26.549 09:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.549 09:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:26.549 09:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.549 09:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.549 [2024-10-21 09:57:02.916046] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:26.549 [2024-10-21 09:57:02.916255] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:26.549 [2024-10-21 09:57:02.916307] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:12:26.549 [2024-10-21 09:57:02.916339] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:26.549 [2024-10-21 09:57:02.919026] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:26.549 [2024-10-21 09:57:02.919126] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:26.549 pt1 00:12:26.549 09:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.549 09:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:26.549 09:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:26.549 09:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:26.549 09:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:12:26.549 09:57:02 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:26.549 09:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:26.549 09:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:26.549 09:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:26.549 09:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:12:26.549 09:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.549 09:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.549 malloc2 00:12:26.549 09:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.549 09:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:26.549 09:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.549 09:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.549 [2024-10-21 09:57:02.988457] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:26.549 [2024-10-21 09:57:02.988663] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:26.549 [2024-10-21 09:57:02.988719] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:12:26.549 [2024-10-21 09:57:02.988760] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:26.549 [2024-10-21 09:57:02.991670] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:26.549 [2024-10-21 09:57:02.991784] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:26.549 
pt2 00:12:26.549 09:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.549 09:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:26.549 09:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:26.549 09:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:12:26.549 09:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:12:26.549 09:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:26.549 09:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:26.549 09:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:26.549 09:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:26.549 09:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:12:26.549 09:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.549 09:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.549 malloc3 00:12:26.549 09:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.549 09:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:26.549 09:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.549 09:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.549 [2024-10-21 09:57:03.072335] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:26.549 [2024-10-21 09:57:03.072525] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:26.549 [2024-10-21 09:57:03.072558] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:12:26.549 [2024-10-21 09:57:03.072580] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:26.549 [2024-10-21 09:57:03.075227] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:26.549 pt3 00:12:26.550 [2024-10-21 09:57:03.075337] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:26.550 09:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.550 09:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:26.550 09:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:26.550 09:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:12:26.550 09:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:12:26.550 09:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:12:26.550 09:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:26.550 09:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:26.550 09:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:26.550 09:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:12:26.550 09:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.550 09:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.550 malloc4 00:12:26.550 09:57:03 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.550 09:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:26.550 09:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.550 09:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.550 [2024-10-21 09:57:03.143815] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:26.550 [2024-10-21 09:57:03.143996] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:26.550 [2024-10-21 09:57:03.144040] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:26.550 [2024-10-21 09:57:03.144072] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:26.808 [2024-10-21 09:57:03.146712] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:26.808 [2024-10-21 09:57:03.146811] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:26.808 pt4 00:12:26.808 09:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.808 09:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:26.808 09:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:26.809 09:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:12:26.809 09:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.809 09:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.809 [2024-10-21 09:57:03.155867] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:26.809 [2024-10-21 09:57:03.158177] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:26.809 [2024-10-21 09:57:03.158319] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:26.809 [2024-10-21 09:57:03.158391] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:26.809 [2024-10-21 09:57:03.158661] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000005b80 00:12:26.809 [2024-10-21 09:57:03.158712] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:26.809 [2024-10-21 09:57:03.159068] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:12:26.809 [2024-10-21 09:57:03.159287] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000005b80 00:12:26.809 [2024-10-21 09:57:03.159303] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000005b80 00:12:26.809 [2024-10-21 09:57:03.159519] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:26.809 09:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.809 09:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:26.809 09:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:26.809 09:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:26.809 09:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:26.809 09:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:26.809 09:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:26.809 09:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:26.809 
09:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:26.809 09:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:26.809 09:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:26.809 09:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:26.809 09:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.809 09:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.809 09:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.809 09:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.809 09:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:26.809 "name": "raid_bdev1", 00:12:26.809 "uuid": "e711900a-f9cd-4093-b530-128d952278ca", 00:12:26.809 "strip_size_kb": 0, 00:12:26.809 "state": "online", 00:12:26.809 "raid_level": "raid1", 00:12:26.809 "superblock": true, 00:12:26.809 "num_base_bdevs": 4, 00:12:26.809 "num_base_bdevs_discovered": 4, 00:12:26.809 "num_base_bdevs_operational": 4, 00:12:26.809 "base_bdevs_list": [ 00:12:26.809 { 00:12:26.809 "name": "pt1", 00:12:26.809 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:26.809 "is_configured": true, 00:12:26.809 "data_offset": 2048, 00:12:26.809 "data_size": 63488 00:12:26.809 }, 00:12:26.809 { 00:12:26.809 "name": "pt2", 00:12:26.809 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:26.809 "is_configured": true, 00:12:26.809 "data_offset": 2048, 00:12:26.809 "data_size": 63488 00:12:26.809 }, 00:12:26.809 { 00:12:26.809 "name": "pt3", 00:12:26.809 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:26.809 "is_configured": true, 00:12:26.809 "data_offset": 2048, 00:12:26.809 "data_size": 63488 
00:12:26.809 }, 00:12:26.809 { 00:12:26.809 "name": "pt4", 00:12:26.809 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:26.809 "is_configured": true, 00:12:26.809 "data_offset": 2048, 00:12:26.809 "data_size": 63488 00:12:26.809 } 00:12:26.809 ] 00:12:26.809 }' 00:12:26.809 09:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:26.809 09:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.067 09:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:27.067 09:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:27.067 09:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:27.067 09:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:27.067 09:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:27.067 09:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:27.067 09:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:27.067 09:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.067 09:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.067 09:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:27.067 [2024-10-21 09:57:03.587508] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:27.067 09:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.067 09:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:27.067 "name": "raid_bdev1", 00:12:27.067 "aliases": [ 00:12:27.067 "e711900a-f9cd-4093-b530-128d952278ca" 00:12:27.067 ], 
00:12:27.067 "product_name": "Raid Volume", 00:12:27.067 "block_size": 512, 00:12:27.067 "num_blocks": 63488, 00:12:27.068 "uuid": "e711900a-f9cd-4093-b530-128d952278ca", 00:12:27.068 "assigned_rate_limits": { 00:12:27.068 "rw_ios_per_sec": 0, 00:12:27.068 "rw_mbytes_per_sec": 0, 00:12:27.068 "r_mbytes_per_sec": 0, 00:12:27.068 "w_mbytes_per_sec": 0 00:12:27.068 }, 00:12:27.068 "claimed": false, 00:12:27.068 "zoned": false, 00:12:27.068 "supported_io_types": { 00:12:27.068 "read": true, 00:12:27.068 "write": true, 00:12:27.068 "unmap": false, 00:12:27.068 "flush": false, 00:12:27.068 "reset": true, 00:12:27.068 "nvme_admin": false, 00:12:27.068 "nvme_io": false, 00:12:27.068 "nvme_io_md": false, 00:12:27.068 "write_zeroes": true, 00:12:27.068 "zcopy": false, 00:12:27.068 "get_zone_info": false, 00:12:27.068 "zone_management": false, 00:12:27.068 "zone_append": false, 00:12:27.068 "compare": false, 00:12:27.068 "compare_and_write": false, 00:12:27.068 "abort": false, 00:12:27.068 "seek_hole": false, 00:12:27.068 "seek_data": false, 00:12:27.068 "copy": false, 00:12:27.068 "nvme_iov_md": false 00:12:27.068 }, 00:12:27.068 "memory_domains": [ 00:12:27.068 { 00:12:27.068 "dma_device_id": "system", 00:12:27.068 "dma_device_type": 1 00:12:27.068 }, 00:12:27.068 { 00:12:27.068 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:27.068 "dma_device_type": 2 00:12:27.068 }, 00:12:27.068 { 00:12:27.068 "dma_device_id": "system", 00:12:27.068 "dma_device_type": 1 00:12:27.068 }, 00:12:27.068 { 00:12:27.068 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:27.068 "dma_device_type": 2 00:12:27.068 }, 00:12:27.068 { 00:12:27.068 "dma_device_id": "system", 00:12:27.068 "dma_device_type": 1 00:12:27.068 }, 00:12:27.068 { 00:12:27.068 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:27.068 "dma_device_type": 2 00:12:27.068 }, 00:12:27.068 { 00:12:27.068 "dma_device_id": "system", 00:12:27.068 "dma_device_type": 1 00:12:27.068 }, 00:12:27.068 { 00:12:27.068 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:12:27.068 "dma_device_type": 2 00:12:27.068 } 00:12:27.068 ], 00:12:27.068 "driver_specific": { 00:12:27.068 "raid": { 00:12:27.068 "uuid": "e711900a-f9cd-4093-b530-128d952278ca", 00:12:27.068 "strip_size_kb": 0, 00:12:27.068 "state": "online", 00:12:27.068 "raid_level": "raid1", 00:12:27.068 "superblock": true, 00:12:27.068 "num_base_bdevs": 4, 00:12:27.068 "num_base_bdevs_discovered": 4, 00:12:27.068 "num_base_bdevs_operational": 4, 00:12:27.068 "base_bdevs_list": [ 00:12:27.068 { 00:12:27.068 "name": "pt1", 00:12:27.068 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:27.068 "is_configured": true, 00:12:27.068 "data_offset": 2048, 00:12:27.068 "data_size": 63488 00:12:27.068 }, 00:12:27.068 { 00:12:27.068 "name": "pt2", 00:12:27.068 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:27.068 "is_configured": true, 00:12:27.068 "data_offset": 2048, 00:12:27.068 "data_size": 63488 00:12:27.068 }, 00:12:27.068 { 00:12:27.068 "name": "pt3", 00:12:27.068 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:27.068 "is_configured": true, 00:12:27.068 "data_offset": 2048, 00:12:27.068 "data_size": 63488 00:12:27.068 }, 00:12:27.068 { 00:12:27.068 "name": "pt4", 00:12:27.068 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:27.068 "is_configured": true, 00:12:27.068 "data_offset": 2048, 00:12:27.068 "data_size": 63488 00:12:27.068 } 00:12:27.068 ] 00:12:27.068 } 00:12:27.068 } 00:12:27.068 }' 00:12:27.068 09:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:27.326 09:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:27.326 pt2 00:12:27.326 pt3 00:12:27.326 pt4' 00:12:27.326 09:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:27.326 09:57:03 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:27.326 09:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:27.326 09:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:27.326 09:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:27.326 09:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.326 09:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.326 09:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.326 09:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:27.326 09:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:27.326 09:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:27.326 09:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:27.326 09:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:27.326 09:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.326 09:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.326 09:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.326 09:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:27.326 09:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:27.326 09:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:27.326 09:57:03 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:27.326 09:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:27.326 09:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.326 09:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.326 09:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.326 09:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:27.326 09:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:27.326 09:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:27.326 09:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:27.326 09:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.326 09:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.326 09:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:27.326 09:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.585 09:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:27.585 09:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:27.585 09:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:12:27.585 09:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:27.585 09:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:27.585 09:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.585 [2024-10-21 09:57:03.935124] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:27.585 09:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.585 09:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=e711900a-f9cd-4093-b530-128d952278ca 00:12:27.585 09:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z e711900a-f9cd-4093-b530-128d952278ca ']' 00:12:27.585 09:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:27.585 09:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.585 09:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.585 [2024-10-21 09:57:03.978780] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:27.585 [2024-10-21 09:57:03.978922] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:27.585 [2024-10-21 09:57:03.979069] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:27.585 [2024-10-21 09:57:03.979176] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:27.585 [2024-10-21 09:57:03.979194] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000005b80 name raid_bdev1, state offline 00:12:27.585 09:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.585 09:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.585 09:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:12:27.585 09:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:12:27.585 09:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.585 09:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.585 09:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:12:27.585 09:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:12:27.585 09:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:27.585 09:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:12:27.585 09:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.585 09:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.585 09:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.585 09:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:27.585 09:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:12:27.585 09:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.585 09:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.585 09:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.585 09:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:27.585 09:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:12:27.585 09:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.585 09:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.585 09:57:04 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.585 09:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:27.586 09:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:12:27.586 09:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.586 09:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.586 09:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.586 09:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:12:27.586 09:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:27.586 09:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.586 09:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.586 09:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.586 09:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:12:27.586 09:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:27.586 09:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:12:27.586 09:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:27.586 09:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:12:27.586 09:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:27.586 09:57:04 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:12:27.586 09:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:27.586 09:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:27.586 09:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.586 09:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.586 [2024-10-21 09:57:04.150828] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:27.586 [2024-10-21 09:57:04.153387] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:27.586 [2024-10-21 09:57:04.153505] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:12:27.586 [2024-10-21 09:57:04.153587] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:12:27.586 [2024-10-21 09:57:04.153692] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:27.586 [2024-10-21 09:57:04.153814] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:27.586 [2024-10-21 09:57:04.153880] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:12:27.586 [2024-10-21 09:57:04.153955] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:12:27.586 [2024-10-21 09:57:04.154014] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:27.586 [2024-10-21 09:57:04.154055] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000005f00 name 
raid_bdev1, state configuring 00:12:27.586 request: 00:12:27.586 { 00:12:27.586 "name": "raid_bdev1", 00:12:27.586 "raid_level": "raid1", 00:12:27.586 "base_bdevs": [ 00:12:27.586 "malloc1", 00:12:27.586 "malloc2", 00:12:27.586 "malloc3", 00:12:27.586 "malloc4" 00:12:27.586 ], 00:12:27.586 "superblock": false, 00:12:27.586 "method": "bdev_raid_create", 00:12:27.586 "req_id": 1 00:12:27.586 } 00:12:27.586 Got JSON-RPC error response 00:12:27.586 response: 00:12:27.586 { 00:12:27.586 "code": -17, 00:12:27.586 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:27.586 } 00:12:27.586 09:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:12:27.586 09:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:12:27.586 09:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:27.586 09:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:27.586 09:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:27.586 09:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.586 09:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.586 09:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.586 09:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:12:27.586 09:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.895 09:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:12:27.895 09:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:12:27.895 09:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:27.895 
09:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.895 09:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.895 [2024-10-21 09:57:04.214798] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:27.895 [2024-10-21 09:57:04.215001] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:27.895 [2024-10-21 09:57:04.215039] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:12:27.895 [2024-10-21 09:57:04.215073] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:27.895 [2024-10-21 09:57:04.217792] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:27.895 [2024-10-21 09:57:04.217895] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:27.895 [2024-10-21 09:57:04.218029] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:27.895 [2024-10-21 09:57:04.218122] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:27.895 pt1 00:12:27.895 09:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.895 09:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:12:27.895 09:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:27.895 09:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:27.895 09:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:27.895 09:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:27.895 09:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:27.895 09:57:04 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:27.895 09:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:27.895 09:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:27.895 09:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:27.895 09:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:27.895 09:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.895 09:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.895 09:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.895 09:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.895 09:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:27.895 "name": "raid_bdev1", 00:12:27.895 "uuid": "e711900a-f9cd-4093-b530-128d952278ca", 00:12:27.895 "strip_size_kb": 0, 00:12:27.895 "state": "configuring", 00:12:27.895 "raid_level": "raid1", 00:12:27.895 "superblock": true, 00:12:27.895 "num_base_bdevs": 4, 00:12:27.895 "num_base_bdevs_discovered": 1, 00:12:27.895 "num_base_bdevs_operational": 4, 00:12:27.895 "base_bdevs_list": [ 00:12:27.895 { 00:12:27.895 "name": "pt1", 00:12:27.895 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:27.895 "is_configured": true, 00:12:27.895 "data_offset": 2048, 00:12:27.895 "data_size": 63488 00:12:27.895 }, 00:12:27.895 { 00:12:27.895 "name": null, 00:12:27.895 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:27.895 "is_configured": false, 00:12:27.895 "data_offset": 2048, 00:12:27.895 "data_size": 63488 00:12:27.895 }, 00:12:27.895 { 00:12:27.895 "name": null, 00:12:27.895 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:27.895 
"is_configured": false, 00:12:27.895 "data_offset": 2048, 00:12:27.895 "data_size": 63488 00:12:27.895 }, 00:12:27.895 { 00:12:27.895 "name": null, 00:12:27.895 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:27.895 "is_configured": false, 00:12:27.895 "data_offset": 2048, 00:12:27.895 "data_size": 63488 00:12:27.895 } 00:12:27.895 ] 00:12:27.895 }' 00:12:27.895 09:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:27.895 09:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.169 09:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:12:28.169 09:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:28.169 09:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.169 09:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.169 [2024-10-21 09:57:04.686770] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:28.169 [2024-10-21 09:57:04.686971] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:28.169 [2024-10-21 09:57:04.687016] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:12:28.169 [2024-10-21 09:57:04.687054] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:28.169 [2024-10-21 09:57:04.687729] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:28.169 [2024-10-21 09:57:04.687826] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:28.169 [2024-10-21 09:57:04.687969] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:28.169 [2024-10-21 09:57:04.688037] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:12:28.169 pt2 00:12:28.169 09:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.169 09:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:12:28.169 09:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.169 09:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.169 [2024-10-21 09:57:04.698879] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:12:28.169 09:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.169 09:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:12:28.169 09:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:28.169 09:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:28.169 09:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:28.169 09:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:28.169 09:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:28.169 09:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:28.169 09:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:28.169 09:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:28.169 09:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:28.169 09:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.169 09:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:12:28.169 09:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.169 09:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.169 09:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.169 09:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:28.169 "name": "raid_bdev1", 00:12:28.169 "uuid": "e711900a-f9cd-4093-b530-128d952278ca", 00:12:28.169 "strip_size_kb": 0, 00:12:28.169 "state": "configuring", 00:12:28.169 "raid_level": "raid1", 00:12:28.169 "superblock": true, 00:12:28.169 "num_base_bdevs": 4, 00:12:28.169 "num_base_bdevs_discovered": 1, 00:12:28.169 "num_base_bdevs_operational": 4, 00:12:28.169 "base_bdevs_list": [ 00:12:28.169 { 00:12:28.169 "name": "pt1", 00:12:28.169 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:28.169 "is_configured": true, 00:12:28.169 "data_offset": 2048, 00:12:28.169 "data_size": 63488 00:12:28.169 }, 00:12:28.169 { 00:12:28.169 "name": null, 00:12:28.169 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:28.169 "is_configured": false, 00:12:28.169 "data_offset": 0, 00:12:28.169 "data_size": 63488 00:12:28.169 }, 00:12:28.169 { 00:12:28.169 "name": null, 00:12:28.169 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:28.169 "is_configured": false, 00:12:28.169 "data_offset": 2048, 00:12:28.169 "data_size": 63488 00:12:28.169 }, 00:12:28.169 { 00:12:28.169 "name": null, 00:12:28.169 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:28.169 "is_configured": false, 00:12:28.169 "data_offset": 2048, 00:12:28.169 "data_size": 63488 00:12:28.169 } 00:12:28.169 ] 00:12:28.169 }' 00:12:28.169 09:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:28.169 09:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.738 09:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:12:28.738 09:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:28.738 09:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:28.738 09:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.738 09:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.738 [2024-10-21 09:57:05.206834] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:28.738 [2024-10-21 09:57:05.207046] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:28.738 [2024-10-21 09:57:05.207103] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:12:28.738 [2024-10-21 09:57:05.207142] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:28.738 [2024-10-21 09:57:05.207791] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:28.738 [2024-10-21 09:57:05.207821] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:28.738 [2024-10-21 09:57:05.207939] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:28.738 [2024-10-21 09:57:05.207967] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:28.738 pt2 00:12:28.738 09:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.738 09:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:28.738 09:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:28.738 09:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:28.738 09:57:05 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.738 09:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.738 [2024-10-21 09:57:05.218820] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:28.738 [2024-10-21 09:57:05.218997] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:28.738 [2024-10-21 09:57:05.219046] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:12:28.738 [2024-10-21 09:57:05.219080] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:28.738 [2024-10-21 09:57:05.219687] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:28.738 [2024-10-21 09:57:05.219758] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:28.738 [2024-10-21 09:57:05.219906] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:28.738 [2024-10-21 09:57:05.219959] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:28.738 pt3 00:12:28.738 09:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.738 09:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:28.738 09:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:28.738 09:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:28.738 09:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.738 09:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.738 [2024-10-21 09:57:05.230777] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:28.738 [2024-10-21 
09:57:05.230931] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:28.738 [2024-10-21 09:57:05.230976] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:12:28.738 [2024-10-21 09:57:05.231008] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:28.738 [2024-10-21 09:57:05.231617] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:28.738 [2024-10-21 09:57:05.231686] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:28.738 [2024-10-21 09:57:05.231832] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:28.738 [2024-10-21 09:57:05.231884] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:28.738 [2024-10-21 09:57:05.232074] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:12:28.738 [2024-10-21 09:57:05.232084] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:28.738 [2024-10-21 09:57:05.232402] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:12:28.738 [2024-10-21 09:57:05.232598] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:12:28.738 [2024-10-21 09:57:05.232615] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:12:28.738 [2024-10-21 09:57:05.232762] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:28.738 pt4 00:12:28.738 09:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.738 09:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:28.738 09:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:28.738 09:57:05 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:28.738 09:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:28.738 09:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:28.738 09:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:28.738 09:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:28.738 09:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:28.738 09:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:28.738 09:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:28.738 09:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:28.738 09:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:28.738 09:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.738 09:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.738 09:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:28.738 09:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.738 09:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.738 09:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:28.738 "name": "raid_bdev1", 00:12:28.738 "uuid": "e711900a-f9cd-4093-b530-128d952278ca", 00:12:28.738 "strip_size_kb": 0, 00:12:28.738 "state": "online", 00:12:28.738 "raid_level": "raid1", 00:12:28.738 "superblock": true, 00:12:28.738 "num_base_bdevs": 4, 00:12:28.738 
"num_base_bdevs_discovered": 4, 00:12:28.738 "num_base_bdevs_operational": 4, 00:12:28.738 "base_bdevs_list": [ 00:12:28.738 { 00:12:28.738 "name": "pt1", 00:12:28.738 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:28.738 "is_configured": true, 00:12:28.738 "data_offset": 2048, 00:12:28.738 "data_size": 63488 00:12:28.738 }, 00:12:28.738 { 00:12:28.738 "name": "pt2", 00:12:28.738 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:28.738 "is_configured": true, 00:12:28.738 "data_offset": 2048, 00:12:28.738 "data_size": 63488 00:12:28.738 }, 00:12:28.738 { 00:12:28.738 "name": "pt3", 00:12:28.738 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:28.738 "is_configured": true, 00:12:28.738 "data_offset": 2048, 00:12:28.738 "data_size": 63488 00:12:28.738 }, 00:12:28.738 { 00:12:28.738 "name": "pt4", 00:12:28.738 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:28.738 "is_configured": true, 00:12:28.738 "data_offset": 2048, 00:12:28.738 "data_size": 63488 00:12:28.738 } 00:12:28.738 ] 00:12:28.738 }' 00:12:28.738 09:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:28.738 09:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.306 09:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:12:29.306 09:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:29.306 09:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:29.306 09:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:29.306 09:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:29.306 09:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:29.306 09:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:29.306 09:57:05 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:29.307 09:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.307 09:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.307 [2024-10-21 09:57:05.727155] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:29.307 09:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.307 09:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:29.307 "name": "raid_bdev1", 00:12:29.307 "aliases": [ 00:12:29.307 "e711900a-f9cd-4093-b530-128d952278ca" 00:12:29.307 ], 00:12:29.307 "product_name": "Raid Volume", 00:12:29.307 "block_size": 512, 00:12:29.307 "num_blocks": 63488, 00:12:29.307 "uuid": "e711900a-f9cd-4093-b530-128d952278ca", 00:12:29.307 "assigned_rate_limits": { 00:12:29.307 "rw_ios_per_sec": 0, 00:12:29.307 "rw_mbytes_per_sec": 0, 00:12:29.307 "r_mbytes_per_sec": 0, 00:12:29.307 "w_mbytes_per_sec": 0 00:12:29.307 }, 00:12:29.307 "claimed": false, 00:12:29.307 "zoned": false, 00:12:29.307 "supported_io_types": { 00:12:29.307 "read": true, 00:12:29.307 "write": true, 00:12:29.307 "unmap": false, 00:12:29.307 "flush": false, 00:12:29.307 "reset": true, 00:12:29.307 "nvme_admin": false, 00:12:29.307 "nvme_io": false, 00:12:29.307 "nvme_io_md": false, 00:12:29.307 "write_zeroes": true, 00:12:29.307 "zcopy": false, 00:12:29.307 "get_zone_info": false, 00:12:29.307 "zone_management": false, 00:12:29.307 "zone_append": false, 00:12:29.307 "compare": false, 00:12:29.307 "compare_and_write": false, 00:12:29.307 "abort": false, 00:12:29.307 "seek_hole": false, 00:12:29.307 "seek_data": false, 00:12:29.307 "copy": false, 00:12:29.307 "nvme_iov_md": false 00:12:29.307 }, 00:12:29.307 "memory_domains": [ 00:12:29.307 { 00:12:29.307 "dma_device_id": "system", 00:12:29.307 
"dma_device_type": 1 00:12:29.307 }, 00:12:29.307 { 00:12:29.307 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:29.307 "dma_device_type": 2 00:12:29.307 }, 00:12:29.307 { 00:12:29.307 "dma_device_id": "system", 00:12:29.307 "dma_device_type": 1 00:12:29.307 }, 00:12:29.307 { 00:12:29.307 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:29.307 "dma_device_type": 2 00:12:29.307 }, 00:12:29.307 { 00:12:29.307 "dma_device_id": "system", 00:12:29.307 "dma_device_type": 1 00:12:29.307 }, 00:12:29.307 { 00:12:29.307 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:29.307 "dma_device_type": 2 00:12:29.307 }, 00:12:29.307 { 00:12:29.307 "dma_device_id": "system", 00:12:29.307 "dma_device_type": 1 00:12:29.307 }, 00:12:29.307 { 00:12:29.307 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:29.307 "dma_device_type": 2 00:12:29.307 } 00:12:29.307 ], 00:12:29.307 "driver_specific": { 00:12:29.307 "raid": { 00:12:29.307 "uuid": "e711900a-f9cd-4093-b530-128d952278ca", 00:12:29.307 "strip_size_kb": 0, 00:12:29.307 "state": "online", 00:12:29.307 "raid_level": "raid1", 00:12:29.307 "superblock": true, 00:12:29.307 "num_base_bdevs": 4, 00:12:29.307 "num_base_bdevs_discovered": 4, 00:12:29.307 "num_base_bdevs_operational": 4, 00:12:29.307 "base_bdevs_list": [ 00:12:29.307 { 00:12:29.307 "name": "pt1", 00:12:29.307 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:29.307 "is_configured": true, 00:12:29.307 "data_offset": 2048, 00:12:29.307 "data_size": 63488 00:12:29.307 }, 00:12:29.307 { 00:12:29.307 "name": "pt2", 00:12:29.307 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:29.307 "is_configured": true, 00:12:29.307 "data_offset": 2048, 00:12:29.307 "data_size": 63488 00:12:29.307 }, 00:12:29.307 { 00:12:29.307 "name": "pt3", 00:12:29.307 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:29.307 "is_configured": true, 00:12:29.307 "data_offset": 2048, 00:12:29.307 "data_size": 63488 00:12:29.307 }, 00:12:29.307 { 00:12:29.307 "name": "pt4", 00:12:29.307 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:12:29.307 "is_configured": true, 00:12:29.307 "data_offset": 2048, 00:12:29.307 "data_size": 63488 00:12:29.307 } 00:12:29.307 ] 00:12:29.307 } 00:12:29.307 } 00:12:29.307 }' 00:12:29.307 09:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:29.307 09:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:29.307 pt2 00:12:29.307 pt3 00:12:29.307 pt4' 00:12:29.307 09:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:29.307 09:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:29.307 09:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:29.307 09:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:29.307 09:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:29.307 09:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.307 09:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.307 09:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.567 09:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:29.567 09:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:29.567 09:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:29.567 09:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:29.567 09:57:05 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:29.567 09:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.567 09:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.567 09:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.567 09:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:29.567 09:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:29.567 09:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:29.567 09:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:29.567 09:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:29.567 09:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.567 09:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.567 09:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.567 09:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:29.567 09:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:29.567 09:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:29.567 09:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:29.567 09:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:29.567 09:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:12:29.567 09:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.567 09:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.567 09:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:29.567 09:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:29.567 09:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:12:29.567 09:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:29.567 09:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.567 09:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.567 [2024-10-21 09:57:06.071128] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:29.567 09:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.567 09:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' e711900a-f9cd-4093-b530-128d952278ca '!=' e711900a-f9cd-4093-b530-128d952278ca ']' 00:12:29.567 09:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:12:29.567 09:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:29.567 09:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:29.567 09:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:12:29.567 09:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.567 09:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.567 [2024-10-21 09:57:06.102888] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:12:29.567 09:57:06 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.567 09:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:29.567 09:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:29.567 09:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:29.567 09:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:29.567 09:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:29.567 09:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:29.567 09:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:29.567 09:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:29.567 09:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:29.567 09:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:29.567 09:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.567 09:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.567 09:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:29.567 09:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.567 09:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.567 09:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:29.567 "name": "raid_bdev1", 00:12:29.567 "uuid": "e711900a-f9cd-4093-b530-128d952278ca", 00:12:29.567 "strip_size_kb": 0, 00:12:29.567 "state": "online", 
00:12:29.567 "raid_level": "raid1", 00:12:29.567 "superblock": true, 00:12:29.567 "num_base_bdevs": 4, 00:12:29.567 "num_base_bdevs_discovered": 3, 00:12:29.567 "num_base_bdevs_operational": 3, 00:12:29.567 "base_bdevs_list": [ 00:12:29.567 { 00:12:29.567 "name": null, 00:12:29.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:29.567 "is_configured": false, 00:12:29.567 "data_offset": 0, 00:12:29.567 "data_size": 63488 00:12:29.567 }, 00:12:29.567 { 00:12:29.567 "name": "pt2", 00:12:29.567 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:29.567 "is_configured": true, 00:12:29.567 "data_offset": 2048, 00:12:29.567 "data_size": 63488 00:12:29.567 }, 00:12:29.567 { 00:12:29.567 "name": "pt3", 00:12:29.567 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:29.567 "is_configured": true, 00:12:29.567 "data_offset": 2048, 00:12:29.567 "data_size": 63488 00:12:29.567 }, 00:12:29.567 { 00:12:29.567 "name": "pt4", 00:12:29.567 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:29.567 "is_configured": true, 00:12:29.567 "data_offset": 2048, 00:12:29.567 "data_size": 63488 00:12:29.567 } 00:12:29.567 ] 00:12:29.567 }' 00:12:29.567 09:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:29.567 09:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.136 09:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:30.136 09:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.136 09:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.136 [2024-10-21 09:57:06.554796] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:30.136 [2024-10-21 09:57:06.554947] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:30.136 [2024-10-21 09:57:06.555082] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:12:30.136 [2024-10-21 09:57:06.555192] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:30.136 [2024-10-21 09:57:06.555205] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:12:30.136 09:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.136 09:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:12:30.136 09:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.136 09:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.136 09:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.136 09:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.136 09:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:12:30.136 09:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:12:30.136 09:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:12:30.136 09:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:30.136 09:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:12:30.136 09:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.136 09:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.136 09:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.136 09:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:30.136 09:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:30.136 
09:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:12:30.136 09:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.136 09:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.136 09:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.136 09:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:30.136 09:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:30.136 09:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:12:30.136 09:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.136 09:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.136 09:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.136 09:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:30.136 09:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:30.136 09:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:12:30.136 09:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:30.136 09:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:30.136 09:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.136 09:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.136 [2024-10-21 09:57:06.638810] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:30.136 [2024-10-21 09:57:06.638992] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:30.136 [2024-10-21 09:57:06.639047] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:12:30.136 [2024-10-21 09:57:06.639083] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:30.136 [2024-10-21 09:57:06.641900] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:30.136 [2024-10-21 09:57:06.642026] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:30.136 [2024-10-21 09:57:06.642173] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:30.136 [2024-10-21 09:57:06.642267] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:30.136 pt2 00:12:30.136 09:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.136 09:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:12:30.136 09:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:30.136 09:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:30.136 09:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:30.136 09:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:30.136 09:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:30.136 09:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:30.136 09:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:30.136 09:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:30.136 09:57:06 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:12:30.136 09:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.136 09:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:30.136 09:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.136 09:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.136 09:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.136 09:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:30.136 "name": "raid_bdev1", 00:12:30.136 "uuid": "e711900a-f9cd-4093-b530-128d952278ca", 00:12:30.136 "strip_size_kb": 0, 00:12:30.136 "state": "configuring", 00:12:30.136 "raid_level": "raid1", 00:12:30.136 "superblock": true, 00:12:30.136 "num_base_bdevs": 4, 00:12:30.136 "num_base_bdevs_discovered": 1, 00:12:30.136 "num_base_bdevs_operational": 3, 00:12:30.136 "base_bdevs_list": [ 00:12:30.136 { 00:12:30.136 "name": null, 00:12:30.136 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.136 "is_configured": false, 00:12:30.136 "data_offset": 2048, 00:12:30.136 "data_size": 63488 00:12:30.136 }, 00:12:30.136 { 00:12:30.136 "name": "pt2", 00:12:30.136 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:30.136 "is_configured": true, 00:12:30.136 "data_offset": 2048, 00:12:30.136 "data_size": 63488 00:12:30.136 }, 00:12:30.136 { 00:12:30.136 "name": null, 00:12:30.136 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:30.136 "is_configured": false, 00:12:30.136 "data_offset": 2048, 00:12:30.136 "data_size": 63488 00:12:30.136 }, 00:12:30.136 { 00:12:30.136 "name": null, 00:12:30.136 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:30.136 "is_configured": false, 00:12:30.136 "data_offset": 2048, 00:12:30.136 "data_size": 63488 00:12:30.136 } 00:12:30.136 ] 00:12:30.136 }' 
00:12:30.136 09:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:30.136 09:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.705 09:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:12:30.705 09:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:30.705 09:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:30.705 09:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.705 09:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.705 [2024-10-21 09:57:07.098803] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:30.705 [2024-10-21 09:57:07.099007] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:30.705 [2024-10-21 09:57:07.099056] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:12:30.705 [2024-10-21 09:57:07.099088] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:30.705 [2024-10-21 09:57:07.099736] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:30.705 [2024-10-21 09:57:07.099809] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:30.705 [2024-10-21 09:57:07.099962] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:30.705 [2024-10-21 09:57:07.100023] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:30.705 pt3 00:12:30.705 09:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.705 09:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:12:30.705 09:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:30.705 09:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:30.705 09:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:30.705 09:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:30.705 09:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:30.705 09:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:30.705 09:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:30.705 09:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:30.705 09:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:30.705 09:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.705 09:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:30.705 09:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.705 09:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.705 09:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.705 09:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:30.705 "name": "raid_bdev1", 00:12:30.705 "uuid": "e711900a-f9cd-4093-b530-128d952278ca", 00:12:30.705 "strip_size_kb": 0, 00:12:30.705 "state": "configuring", 00:12:30.705 "raid_level": "raid1", 00:12:30.705 "superblock": true, 00:12:30.705 "num_base_bdevs": 4, 00:12:30.705 "num_base_bdevs_discovered": 2, 00:12:30.705 "num_base_bdevs_operational": 3, 00:12:30.705 
"base_bdevs_list": [ 00:12:30.705 { 00:12:30.705 "name": null, 00:12:30.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.705 "is_configured": false, 00:12:30.705 "data_offset": 2048, 00:12:30.705 "data_size": 63488 00:12:30.705 }, 00:12:30.705 { 00:12:30.705 "name": "pt2", 00:12:30.705 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:30.705 "is_configured": true, 00:12:30.705 "data_offset": 2048, 00:12:30.705 "data_size": 63488 00:12:30.705 }, 00:12:30.705 { 00:12:30.705 "name": "pt3", 00:12:30.705 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:30.705 "is_configured": true, 00:12:30.705 "data_offset": 2048, 00:12:30.705 "data_size": 63488 00:12:30.705 }, 00:12:30.705 { 00:12:30.705 "name": null, 00:12:30.705 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:30.705 "is_configured": false, 00:12:30.705 "data_offset": 2048, 00:12:30.705 "data_size": 63488 00:12:30.705 } 00:12:30.705 ] 00:12:30.705 }' 00:12:30.705 09:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:30.705 09:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.272 09:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:12:31.272 09:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:31.272 09:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:12:31.272 09:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:31.272 09:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.272 09:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.272 [2024-10-21 09:57:07.594835] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:31.272 [2024-10-21 09:57:07.595026] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:31.272 [2024-10-21 09:57:07.595079] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:12:31.272 [2024-10-21 09:57:07.595115] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:31.272 [2024-10-21 09:57:07.595760] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:31.272 [2024-10-21 09:57:07.595831] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:31.272 [2024-10-21 09:57:07.595981] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:31.272 [2024-10-21 09:57:07.596050] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:31.272 [2024-10-21 09:57:07.596251] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:12:31.272 [2024-10-21 09:57:07.596294] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:31.272 [2024-10-21 09:57:07.596634] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:31.272 [2024-10-21 09:57:07.596876] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:12:31.272 [2024-10-21 09:57:07.596928] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:12:31.272 [2024-10-21 09:57:07.597158] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:31.272 pt4 00:12:31.272 09:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.272 09:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:31.272 09:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:31.272 09:57:07 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:31.272 09:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:31.272 09:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:31.272 09:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:31.272 09:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:31.272 09:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:31.272 09:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:31.272 09:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:31.272 09:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.272 09:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:31.272 09:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.272 09:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.272 09:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.272 09:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:31.272 "name": "raid_bdev1", 00:12:31.272 "uuid": "e711900a-f9cd-4093-b530-128d952278ca", 00:12:31.272 "strip_size_kb": 0, 00:12:31.272 "state": "online", 00:12:31.272 "raid_level": "raid1", 00:12:31.272 "superblock": true, 00:12:31.272 "num_base_bdevs": 4, 00:12:31.272 "num_base_bdevs_discovered": 3, 00:12:31.272 "num_base_bdevs_operational": 3, 00:12:31.272 "base_bdevs_list": [ 00:12:31.272 { 00:12:31.272 "name": null, 00:12:31.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.272 "is_configured": false, 00:12:31.272 
"data_offset": 2048, 00:12:31.272 "data_size": 63488 00:12:31.272 }, 00:12:31.272 { 00:12:31.272 "name": "pt2", 00:12:31.272 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:31.272 "is_configured": true, 00:12:31.272 "data_offset": 2048, 00:12:31.272 "data_size": 63488 00:12:31.272 }, 00:12:31.272 { 00:12:31.272 "name": "pt3", 00:12:31.272 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:31.272 "is_configured": true, 00:12:31.272 "data_offset": 2048, 00:12:31.272 "data_size": 63488 00:12:31.272 }, 00:12:31.272 { 00:12:31.272 "name": "pt4", 00:12:31.272 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:31.272 "is_configured": true, 00:12:31.272 "data_offset": 2048, 00:12:31.272 "data_size": 63488 00:12:31.272 } 00:12:31.272 ] 00:12:31.272 }' 00:12:31.272 09:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:31.272 09:57:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.530 09:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:31.530 09:57:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.530 09:57:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.530 [2024-10-21 09:57:08.058781] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:31.530 [2024-10-21 09:57:08.058919] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:31.530 [2024-10-21 09:57:08.059072] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:31.530 [2024-10-21 09:57:08.059221] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:31.531 [2024-10-21 09:57:08.059290] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:12:31.531 09:57:08 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.531 09:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.531 09:57:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.531 09:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:12:31.531 09:57:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.531 09:57:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.531 09:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:12:31.531 09:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:12:31.531 09:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:12:31.531 09:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:12:31.531 09:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:12:31.531 09:57:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.531 09:57:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.531 09:57:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.531 09:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:31.531 09:57:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.531 09:57:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.531 [2024-10-21 09:57:08.122763] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:31.531 [2024-10-21 09:57:08.122949] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:12:31.531 [2024-10-21 09:57:08.122977] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:12:31.531 [2024-10-21 09:57:08.122991] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:31.789 [2024-10-21 09:57:08.125862] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:31.789 [2024-10-21 09:57:08.125912] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:31.789 [2024-10-21 09:57:08.126030] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:31.789 [2024-10-21 09:57:08.126083] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:31.789 [2024-10-21 09:57:08.126260] bdev_raid.c:3679:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:12:31.789 [2024-10-21 09:57:08.126275] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:31.789 [2024-10-21 09:57:08.126293] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state configuring 00:12:31.789 [2024-10-21 09:57:08.126369] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:31.789 [2024-10-21 09:57:08.126495] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:31.789 pt1 00:12:31.789 09:57:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.789 09:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:12:31.789 09:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:12:31.789 09:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:31.789 09:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:12:31.789 09:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:31.789 09:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:31.789 09:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:31.789 09:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:31.789 09:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:31.789 09:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:31.789 09:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:31.789 09:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.789 09:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:31.789 09:57:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.789 09:57:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.789 09:57:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.789 09:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:31.789 "name": "raid_bdev1", 00:12:31.789 "uuid": "e711900a-f9cd-4093-b530-128d952278ca", 00:12:31.789 "strip_size_kb": 0, 00:12:31.789 "state": "configuring", 00:12:31.790 "raid_level": "raid1", 00:12:31.790 "superblock": true, 00:12:31.790 "num_base_bdevs": 4, 00:12:31.790 "num_base_bdevs_discovered": 2, 00:12:31.790 "num_base_bdevs_operational": 3, 00:12:31.790 "base_bdevs_list": [ 00:12:31.790 { 00:12:31.790 "name": null, 00:12:31.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.790 "is_configured": false, 00:12:31.790 "data_offset": 2048, 00:12:31.790 
"data_size": 63488 00:12:31.790 }, 00:12:31.790 { 00:12:31.790 "name": "pt2", 00:12:31.790 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:31.790 "is_configured": true, 00:12:31.790 "data_offset": 2048, 00:12:31.790 "data_size": 63488 00:12:31.790 }, 00:12:31.790 { 00:12:31.790 "name": "pt3", 00:12:31.790 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:31.790 "is_configured": true, 00:12:31.790 "data_offset": 2048, 00:12:31.790 "data_size": 63488 00:12:31.790 }, 00:12:31.790 { 00:12:31.790 "name": null, 00:12:31.790 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:31.790 "is_configured": false, 00:12:31.790 "data_offset": 2048, 00:12:31.790 "data_size": 63488 00:12:31.790 } 00:12:31.790 ] 00:12:31.790 }' 00:12:31.790 09:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:31.790 09:57:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.048 09:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:12:32.048 09:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:32.048 09:57:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.048 09:57:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.048 09:57:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.307 09:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:12:32.307 09:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:32.307 09:57:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.307 09:57:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.307 [2024-10-21 
09:57:08.670825] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:32.307 [2024-10-21 09:57:08.671027] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:32.307 [2024-10-21 09:57:08.671077] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:12:32.307 [2024-10-21 09:57:08.671146] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:32.307 [2024-10-21 09:57:08.671817] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:32.307 [2024-10-21 09:57:08.671894] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:32.307 [2024-10-21 09:57:08.672021] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:32.307 [2024-10-21 09:57:08.672052] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:32.307 [2024-10-21 09:57:08.672241] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:12:32.307 [2024-10-21 09:57:08.672251] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:32.307 [2024-10-21 09:57:08.672562] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:32.307 [2024-10-21 09:57:08.672768] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:12:32.307 [2024-10-21 09:57:08.672783] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:12:32.307 [2024-10-21 09:57:08.672981] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:32.307 pt4 00:12:32.307 09:57:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.307 09:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:32.307 09:57:08 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:32.307 09:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:32.307 09:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:32.307 09:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:32.307 09:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:32.307 09:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:32.307 09:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:32.307 09:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:32.307 09:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:32.307 09:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.307 09:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:32.307 09:57:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.307 09:57:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.307 09:57:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.307 09:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:32.307 "name": "raid_bdev1", 00:12:32.307 "uuid": "e711900a-f9cd-4093-b530-128d952278ca", 00:12:32.307 "strip_size_kb": 0, 00:12:32.307 "state": "online", 00:12:32.307 "raid_level": "raid1", 00:12:32.307 "superblock": true, 00:12:32.307 "num_base_bdevs": 4, 00:12:32.307 "num_base_bdevs_discovered": 3, 00:12:32.307 "num_base_bdevs_operational": 3, 00:12:32.307 "base_bdevs_list": [ 00:12:32.307 { 
00:12:32.307 "name": null, 00:12:32.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.307 "is_configured": false, 00:12:32.307 "data_offset": 2048, 00:12:32.307 "data_size": 63488 00:12:32.307 }, 00:12:32.307 { 00:12:32.307 "name": "pt2", 00:12:32.307 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:32.307 "is_configured": true, 00:12:32.307 "data_offset": 2048, 00:12:32.307 "data_size": 63488 00:12:32.307 }, 00:12:32.307 { 00:12:32.307 "name": "pt3", 00:12:32.307 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:32.307 "is_configured": true, 00:12:32.307 "data_offset": 2048, 00:12:32.307 "data_size": 63488 00:12:32.307 }, 00:12:32.307 { 00:12:32.307 "name": "pt4", 00:12:32.307 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:32.307 "is_configured": true, 00:12:32.307 "data_offset": 2048, 00:12:32.307 "data_size": 63488 00:12:32.307 } 00:12:32.307 ] 00:12:32.307 }' 00:12:32.307 09:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:32.307 09:57:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.565 09:57:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:12:32.565 09:57:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:32.565 09:57:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.565 09:57:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.565 09:57:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.823 09:57:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:12:32.823 09:57:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:32.823 09:57:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.823 
09:57:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.823 09:57:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:12:32.823 [2024-10-21 09:57:09.171135] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:32.823 09:57:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.823 09:57:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' e711900a-f9cd-4093-b530-128d952278ca '!=' e711900a-f9cd-4093-b530-128d952278ca ']' 00:12:32.823 09:57:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74126 00:12:32.823 09:57:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 74126 ']' 00:12:32.823 09:57:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 74126 00:12:32.823 09:57:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:12:32.823 09:57:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:32.823 09:57:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74126 00:12:32.823 killing process with pid 74126 00:12:32.823 09:57:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:32.823 09:57:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:32.823 09:57:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74126' 00:12:32.823 09:57:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 74126 00:12:32.824 09:57:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 74126 00:12:32.824 [2024-10-21 09:57:09.256943] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:32.824 [2024-10-21 09:57:09.257097] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:32.824 [2024-10-21 09:57:09.257209] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:32.824 [2024-10-21 09:57:09.257224] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:12:33.389 [2024-10-21 09:57:09.733800] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:34.762 09:57:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:12:34.762 00:12:34.762 real 0m9.230s 00:12:34.762 user 0m14.266s 00:12:34.762 sys 0m1.788s 00:12:34.762 09:57:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:34.762 09:57:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.762 ************************************ 00:12:34.762 END TEST raid_superblock_test 00:12:34.762 ************************************ 00:12:34.763 09:57:11 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:12:34.763 09:57:11 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:34.763 09:57:11 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:34.763 09:57:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:34.763 ************************************ 00:12:34.763 START TEST raid_read_error_test 00:12:34.763 ************************************ 00:12:34.763 09:57:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 4 read 00:12:34.763 09:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:12:34.763 09:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:34.763 09:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:12:34.763 09:57:11 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:34.763 09:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:34.763 09:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:34.763 09:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:34.763 09:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:34.763 09:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:34.763 09:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:34.763 09:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:34.763 09:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:34.763 09:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:34.763 09:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:34.763 09:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:34.763 09:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:34.763 09:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:34.763 09:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:34.763 09:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:34.763 09:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:34.763 09:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:34.763 09:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:34.763 09:57:11 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:34.763 09:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:34.763 09:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:12:34.763 09:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:12:34.763 09:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:34.763 09:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.6b1tNuuk6o 00:12:34.763 09:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74624 00:12:34.763 09:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74624 00:12:34.763 09:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:34.763 09:57:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 74624 ']' 00:12:34.763 09:57:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:34.763 09:57:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:34.763 09:57:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:34.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:34.763 09:57:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:34.763 09:57:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.763 [2024-10-21 09:57:11.262713] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
00:12:34.763 [2024-10-21 09:57:11.262981] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74624 ] 00:12:35.021 [2024-10-21 09:57:11.433919] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:35.021 [2024-10-21 09:57:11.583667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:35.282 [2024-10-21 09:57:11.858329] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:35.282 [2024-10-21 09:57:11.858432] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:35.858 09:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:35.858 09:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:12:35.858 09:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:35.858 09:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:35.858 09:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.858 09:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.858 BaseBdev1_malloc 00:12:35.858 09:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.858 09:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:35.858 09:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.858 09:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.858 true 00:12:35.858 09:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:12:35.858 09:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:35.858 09:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.858 09:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.858 [2024-10-21 09:57:12.214438] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:35.858 [2024-10-21 09:57:12.214531] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:35.858 [2024-10-21 09:57:12.214557] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:35.858 [2024-10-21 09:57:12.214623] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:35.858 [2024-10-21 09:57:12.217448] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:35.858 [2024-10-21 09:57:12.217611] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:35.858 BaseBdev1 00:12:35.858 09:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.858 09:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:35.858 09:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:35.858 09:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.858 09:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.858 BaseBdev2_malloc 00:12:35.858 09:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.858 09:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:35.858 09:57:12 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.858 09:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.858 true 00:12:35.858 09:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.858 09:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:35.858 09:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.858 09:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.858 [2024-10-21 09:57:12.289443] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:35.858 [2024-10-21 09:57:12.289551] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:35.858 [2024-10-21 09:57:12.289718] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:12:35.858 [2024-10-21 09:57:12.289739] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:35.858 [2024-10-21 09:57:12.292641] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:35.858 [2024-10-21 09:57:12.292695] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:35.858 BaseBdev2 00:12:35.858 09:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.858 09:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:35.858 09:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:35.858 09:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.858 09:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.858 BaseBdev3_malloc 00:12:35.858 09:57:12 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.858 09:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:35.858 09:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.858 09:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.858 true 00:12:35.858 09:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.858 09:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:35.858 09:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.858 09:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.858 [2024-10-21 09:57:12.379870] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:35.858 [2024-10-21 09:57:12.379975] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:35.858 [2024-10-21 09:57:12.380005] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:35.858 [2024-10-21 09:57:12.380018] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:35.858 [2024-10-21 09:57:12.382769] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:35.858 [2024-10-21 09:57:12.382948] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:35.858 BaseBdev3 00:12:35.858 09:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.858 09:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:35.858 09:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:12:35.858 09:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.858 09:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.858 BaseBdev4_malloc 00:12:35.858 09:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.858 09:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:35.858 09:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.858 09:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.117 true 00:12:36.117 09:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.117 09:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:36.117 09:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.117 09:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.117 [2024-10-21 09:57:12.461738] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:36.117 [2024-10-21 09:57:12.461843] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:36.117 [2024-10-21 09:57:12.461874] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:12:36.117 [2024-10-21 09:57:12.461888] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:36.117 [2024-10-21 09:57:12.464795] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:36.117 [2024-10-21 09:57:12.464855] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:36.117 BaseBdev4 00:12:36.117 09:57:12 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.117 09:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:36.117 09:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.117 09:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.117 [2024-10-21 09:57:12.473856] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:36.117 [2024-10-21 09:57:12.476540] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:36.117 [2024-10-21 09:57:12.476780] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:36.117 [2024-10-21 09:57:12.476866] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:36.117 [2024-10-21 09:57:12.477199] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:12:36.117 [2024-10-21 09:57:12.477221] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:36.117 [2024-10-21 09:57:12.477616] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:12:36.117 [2024-10-21 09:57:12.477866] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:12:36.117 [2024-10-21 09:57:12.477880] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:12:36.117 [2024-10-21 09:57:12.478180] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:36.117 09:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.117 09:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:36.117 09:57:12 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:36.117 09:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:36.117 09:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:36.117 09:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:36.117 09:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:36.117 09:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:36.117 09:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:36.117 09:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:36.117 09:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:36.117 09:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.117 09:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:36.117 09:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.117 09:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.117 09:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.117 09:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:36.117 "name": "raid_bdev1", 00:12:36.117 "uuid": "3e2311e3-b604-4b92-8080-761c636b50cd", 00:12:36.117 "strip_size_kb": 0, 00:12:36.117 "state": "online", 00:12:36.117 "raid_level": "raid1", 00:12:36.117 "superblock": true, 00:12:36.117 "num_base_bdevs": 4, 00:12:36.117 "num_base_bdevs_discovered": 4, 00:12:36.117 "num_base_bdevs_operational": 4, 00:12:36.117 "base_bdevs_list": [ 00:12:36.117 { 
00:12:36.117 "name": "BaseBdev1", 00:12:36.117 "uuid": "e668f908-9e3f-52e5-a7c8-56f4b81a6747", 00:12:36.117 "is_configured": true, 00:12:36.117 "data_offset": 2048, 00:12:36.117 "data_size": 63488 00:12:36.117 }, 00:12:36.117 { 00:12:36.117 "name": "BaseBdev2", 00:12:36.117 "uuid": "fa00eba7-4d88-57cc-a915-818005dd5b89", 00:12:36.117 "is_configured": true, 00:12:36.117 "data_offset": 2048, 00:12:36.117 "data_size": 63488 00:12:36.117 }, 00:12:36.117 { 00:12:36.117 "name": "BaseBdev3", 00:12:36.117 "uuid": "aae35744-6b22-55df-9f0a-44ff1e119852", 00:12:36.117 "is_configured": true, 00:12:36.117 "data_offset": 2048, 00:12:36.117 "data_size": 63488 00:12:36.117 }, 00:12:36.117 { 00:12:36.117 "name": "BaseBdev4", 00:12:36.117 "uuid": "fdb729b0-8d5d-52db-9dfe-c27c0524e458", 00:12:36.117 "is_configured": true, 00:12:36.117 "data_offset": 2048, 00:12:36.117 "data_size": 63488 00:12:36.117 } 00:12:36.117 ] 00:12:36.117 }' 00:12:36.117 09:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:36.117 09:57:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.376 09:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:36.376 09:57:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:36.634 [2024-10-21 09:57:13.046834] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:37.568 09:57:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:37.568 09:57:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.568 09:57:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.568 09:57:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.568 09:57:13 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:37.568 09:57:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:37.568 09:57:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:12:37.568 09:57:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:12:37.568 09:57:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:37.568 09:57:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:37.568 09:57:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:37.568 09:57:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:37.568 09:57:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:37.568 09:57:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:37.568 09:57:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:37.568 09:57:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:37.568 09:57:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:37.568 09:57:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:37.568 09:57:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.568 09:57:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:37.568 09:57:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.568 09:57:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.568 09:57:13 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.568 09:57:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:37.568 "name": "raid_bdev1", 00:12:37.568 "uuid": "3e2311e3-b604-4b92-8080-761c636b50cd", 00:12:37.568 "strip_size_kb": 0, 00:12:37.568 "state": "online", 00:12:37.568 "raid_level": "raid1", 00:12:37.568 "superblock": true, 00:12:37.568 "num_base_bdevs": 4, 00:12:37.568 "num_base_bdevs_discovered": 4, 00:12:37.568 "num_base_bdevs_operational": 4, 00:12:37.568 "base_bdevs_list": [ 00:12:37.568 { 00:12:37.568 "name": "BaseBdev1", 00:12:37.568 "uuid": "e668f908-9e3f-52e5-a7c8-56f4b81a6747", 00:12:37.568 "is_configured": true, 00:12:37.568 "data_offset": 2048, 00:12:37.568 "data_size": 63488 00:12:37.568 }, 00:12:37.568 { 00:12:37.568 "name": "BaseBdev2", 00:12:37.568 "uuid": "fa00eba7-4d88-57cc-a915-818005dd5b89", 00:12:37.568 "is_configured": true, 00:12:37.568 "data_offset": 2048, 00:12:37.568 "data_size": 63488 00:12:37.568 }, 00:12:37.568 { 00:12:37.568 "name": "BaseBdev3", 00:12:37.568 "uuid": "aae35744-6b22-55df-9f0a-44ff1e119852", 00:12:37.568 "is_configured": true, 00:12:37.568 "data_offset": 2048, 00:12:37.568 "data_size": 63488 00:12:37.568 }, 00:12:37.568 { 00:12:37.568 "name": "BaseBdev4", 00:12:37.568 "uuid": "fdb729b0-8d5d-52db-9dfe-c27c0524e458", 00:12:37.568 "is_configured": true, 00:12:37.568 "data_offset": 2048, 00:12:37.568 "data_size": 63488 00:12:37.568 } 00:12:37.568 ] 00:12:37.568 }' 00:12:37.568 09:57:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:37.568 09:57:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.135 09:57:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:38.135 09:57:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.135 09:57:14 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:38.135 [2024-10-21 09:57:14.438463] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:38.135 [2024-10-21 09:57:14.438678] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:38.135 [2024-10-21 09:57:14.442191] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:38.135 [2024-10-21 09:57:14.442394] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:38.135 [2024-10-21 09:57:14.442755] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:38.135 [2024-10-21 09:57:14.442875] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:12:38.135 { 00:12:38.135 "results": [ 00:12:38.135 { 00:12:38.135 "job": "raid_bdev1", 00:12:38.135 "core_mask": "0x1", 00:12:38.135 "workload": "randrw", 00:12:38.135 "percentage": 50, 00:12:38.135 "status": "finished", 00:12:38.135 "queue_depth": 1, 00:12:38.135 "io_size": 131072, 00:12:38.135 "runtime": 1.392338, 00:12:38.135 "iops": 7026.311139967451, 00:12:38.135 "mibps": 878.2888924959313, 00:12:38.135 "io_failed": 0, 00:12:38.135 "io_timeout": 0, 00:12:38.135 "avg_latency_us": 139.44603699403697, 00:12:38.135 "min_latency_us": 25.3764192139738, 00:12:38.135 "max_latency_us": 1588.317903930131 00:12:38.135 } 00:12:38.135 ], 00:12:38.135 "core_count": 1 00:12:38.135 } 00:12:38.135 09:57:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.135 09:57:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74624 00:12:38.135 09:57:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 74624 ']' 00:12:38.135 09:57:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 74624 00:12:38.135 09:57:14 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@955 -- # uname 00:12:38.135 09:57:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:38.135 09:57:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74624 00:12:38.135 killing process with pid 74624 00:12:38.135 09:57:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:38.135 09:57:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:38.135 09:57:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74624' 00:12:38.135 09:57:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 74624 00:12:38.135 09:57:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 74624 00:12:38.135 [2024-10-21 09:57:14.490159] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:38.394 [2024-10-21 09:57:14.879950] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:39.768 09:57:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.6b1tNuuk6o 00:12:39.768 09:57:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:39.768 09:57:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:39.768 09:57:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:12:39.768 09:57:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:12:39.768 09:57:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:39.768 09:57:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:39.768 09:57:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:39.768 00:12:39.768 real 0m5.129s 00:12:39.768 user 0m5.934s 00:12:39.768 sys 0m0.735s 
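The throughput figures in the bdevperf results block above are internally consistent: the reported MiB/s equals the reported IOPS times the 128 KiB I/O size (`-o 128k` in the bdevperf invocation). A quick sketch of that cross-check, with the values copied from the results JSON in this log (the formula is standard rate accounting, not taken from SPDK source):

```python
# Cross-check bdevperf's reported "mibps" against its reported "iops".
# Both values are copied from the results JSON printed above; the I/O
# size is 131072 bytes per the bdevperf "-o 128k" option.
iops = 7026.311139967451
io_size_bytes = 131072  # 128 KiB per I/O

# MiB/s = (I/Os per second * bytes per I/O) / bytes per MiB
mibps = iops * io_size_bytes / (1024 * 1024)

print(round(mibps, 6))  # matches the "mibps" field to reporting precision
```

Since 131072 / 1048576 is exactly 1/8, the check reduces to IOPS divided by eight.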
00:12:39.768 09:57:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:39.768 09:57:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.768 ************************************ 00:12:39.768 END TEST raid_read_error_test 00:12:39.768 ************************************ 00:12:39.768 09:57:16 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:12:39.768 09:57:16 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:39.768 09:57:16 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:39.768 09:57:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:39.768 ************************************ 00:12:39.768 START TEST raid_write_error_test 00:12:39.768 ************************************ 00:12:39.768 09:57:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 4 write 00:12:39.768 09:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:12:39.768 09:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:39.768 09:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:12:39.768 09:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:39.768 09:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:39.768 09:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:39.768 09:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:39.768 09:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:39.768 09:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:39.768 09:57:16 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:39.768 09:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:39.768 09:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:39.768 09:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:39.768 09:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:39.768 09:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:39.768 09:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:39.768 09:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:40.027 09:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:40.027 09:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:40.027 09:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:40.027 09:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:40.027 09:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:40.027 09:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:40.027 09:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:40.027 09:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:12:40.027 09:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:12:40.027 09:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:40.027 09:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.eQ4JszbiYd 00:12:40.027 09:57:16 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74776 00:12:40.027 09:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74776 00:12:40.027 09:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:40.027 09:57:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 74776 ']' 00:12:40.027 09:57:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:40.027 09:57:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:40.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:40.027 09:57:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:40.027 09:57:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:40.027 09:57:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.027 [2024-10-21 09:57:16.475859] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
00:12:40.027 [2024-10-21 09:57:16.476030] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74776 ] 00:12:40.285 [2024-10-21 09:57:16.648827] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:40.285 [2024-10-21 09:57:16.813926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:40.544 [2024-10-21 09:57:17.078541] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:40.544 [2024-10-21 09:57:17.078657] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:40.803 09:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:40.803 09:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:12:40.803 09:57:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:40.803 09:57:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:40.803 09:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.803 09:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.062 BaseBdev1_malloc 00:12:41.062 09:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.062 09:57:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:41.062 09:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.062 09:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.062 true 00:12:41.062 09:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:12:41.062 09:57:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:41.062 09:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.062 09:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.062 [2024-10-21 09:57:17.451985] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:41.062 [2024-10-21 09:57:17.452083] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:41.062 [2024-10-21 09:57:17.452107] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:41.062 [2024-10-21 09:57:17.452124] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:41.062 [2024-10-21 09:57:17.454822] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:41.062 [2024-10-21 09:57:17.454968] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:41.062 BaseBdev1 00:12:41.062 09:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.062 09:57:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:41.062 09:57:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:41.062 09:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.062 09:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.062 BaseBdev2_malloc 00:12:41.062 09:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.062 09:57:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:41.062 09:57:17 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.062 09:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.062 true 00:12:41.062 09:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.062 09:57:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:41.062 09:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.062 09:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.062 [2024-10-21 09:57:17.532471] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:41.062 [2024-10-21 09:57:17.532563] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:41.063 [2024-10-21 09:57:17.532600] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:12:41.063 [2024-10-21 09:57:17.532613] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:41.063 [2024-10-21 09:57:17.535225] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:41.063 [2024-10-21 09:57:17.535274] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:41.063 BaseBdev2 00:12:41.063 09:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.063 09:57:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:41.063 09:57:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:41.063 09:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.063 09:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
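The test script decides how many base bdevs should remain after error injection from the RAID level and the injected I/O type (the `bdev_raid.sh@832`–`@835` branches traced above): for raid1, an injected read error is recovered from redundancy and no member is dropped, while an injected write error removes the failing base bdev. A hedged Python restatement of that branch — the helper name and the general `num - 1` form are illustrative, inferred from the two cases visible in this log (read → 4 of 4, write → 3 of 4), not part of SPDK:

```python
def expected_num_base_bdevs(raid_level: str, error_io_type: str,
                            num_base_bdevs: int) -> int:
    """Mirror the bdev_raid.sh@832-835 branch seen in the trace:
    raid1 keeps all members through read-error injection, but an
    injected write error costs one base bdev."""
    if raid_level == "raid1" and error_io_type == "write":
        return num_base_bdevs - 1
    return num_base_bdevs

# The read-error test above verifies 4 of 4 discovered/operational;
# the write-error test below verifies 3 of 4.
print(expected_num_base_bdevs("raid1", "read", 4))
print(expected_num_base_bdevs("raid1", "write", 4))
```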
00:12:41.063 BaseBdev3_malloc 00:12:41.063 09:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.063 09:57:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:41.063 09:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.063 09:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.063 true 00:12:41.063 09:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.063 09:57:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:41.063 09:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.063 09:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.063 [2024-10-21 09:57:17.628125] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:41.063 [2024-10-21 09:57:17.628222] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:41.063 [2024-10-21 09:57:17.628250] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:41.063 [2024-10-21 09:57:17.628263] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:41.063 [2024-10-21 09:57:17.630869] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:41.063 [2024-10-21 09:57:17.630919] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:41.063 BaseBdev3 00:12:41.063 09:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.063 09:57:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:41.063 09:57:17 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:41.063 09:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.063 09:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.322 BaseBdev4_malloc 00:12:41.322 09:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.322 09:57:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:41.322 09:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.322 09:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.322 true 00:12:41.322 09:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.322 09:57:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:41.322 09:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.322 09:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.322 [2024-10-21 09:57:17.707319] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:41.322 [2024-10-21 09:57:17.707522] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:41.322 [2024-10-21 09:57:17.707582] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:12:41.322 [2024-10-21 09:57:17.707623] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:41.322 [2024-10-21 09:57:17.710479] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:41.322 [2024-10-21 09:57:17.710603] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:41.322 BaseBdev4 
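`verify_raid_bdev_state` pulls the raid bdev's state out of `rpc_cmd bdev_raid_get_bdevs all` with `jq -r '.[] | select(.name == "raid_bdev1")'`, as seen in the `@113` steps of this trace. A minimal Python equivalent of that filter, run against an abbreviated sample shaped like the `raid_bdev_info` blocks in this log (fields trimmed; the second entry is a made-up placeholder to show the selection doing work):

```python
import json

# Abbreviated sample of `bdev_raid_get_bdevs all` output, modeled on the
# raid_bdev_info JSON printed in this log (most fields omitted).
raw = json.dumps([
    {"name": "raid_bdev1", "state": "online", "raid_level": "raid1",
     "num_base_bdevs": 4, "num_base_bdevs_discovered": 4},
    {"name": "some_other_bdev", "state": "offline"},  # hypothetical entry
])

# Equivalent of: jq -r '.[] | select(.name == "raid_bdev1")'
info = next(b for b in json.loads(raw) if b["name"] == "raid_bdev1")

print(info["state"], info["raid_level"], info["num_base_bdevs_discovered"])
```

The shell helper then compares `state`, `raid_level`, `strip_size_kb`, and the discovered/operational counts against the expected values passed in.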
00:12:41.322 09:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.322 09:57:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:41.322 09:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.322 09:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.322 [2024-10-21 09:57:17.719375] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:41.322 [2024-10-21 09:57:17.721737] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:41.322 [2024-10-21 09:57:17.721871] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:41.322 [2024-10-21 09:57:17.721962] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:41.322 [2024-10-21 09:57:17.722282] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:12:41.322 [2024-10-21 09:57:17.722344] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:41.322 [2024-10-21 09:57:17.722740] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:12:41.322 [2024-10-21 09:57:17.723029] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:12:41.322 [2024-10-21 09:57:17.723077] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:12:41.322 [2024-10-21 09:57:17.723397] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:41.322 09:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.322 09:57:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:12:41.322 09:57:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:41.322 09:57:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:41.322 09:57:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:41.322 09:57:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:41.322 09:57:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:41.322 09:57:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:41.322 09:57:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:41.322 09:57:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:41.322 09:57:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:41.322 09:57:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.322 09:57:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:41.322 09:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.322 09:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.322 09:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.322 09:57:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:41.322 "name": "raid_bdev1", 00:12:41.322 "uuid": "a183ee63-969f-4d13-a5a3-77cab37b86bf", 00:12:41.322 "strip_size_kb": 0, 00:12:41.322 "state": "online", 00:12:41.322 "raid_level": "raid1", 00:12:41.322 "superblock": true, 00:12:41.322 "num_base_bdevs": 4, 00:12:41.322 "num_base_bdevs_discovered": 4, 00:12:41.322 
"num_base_bdevs_operational": 4, 00:12:41.322 "base_bdevs_list": [ 00:12:41.322 { 00:12:41.322 "name": "BaseBdev1", 00:12:41.322 "uuid": "1fd3f1dc-c3a9-5409-bdbe-bc5c61389e91", 00:12:41.322 "is_configured": true, 00:12:41.322 "data_offset": 2048, 00:12:41.322 "data_size": 63488 00:12:41.322 }, 00:12:41.322 { 00:12:41.322 "name": "BaseBdev2", 00:12:41.322 "uuid": "e8e169b0-3ff5-5d41-a89b-e1ceaff3aa94", 00:12:41.322 "is_configured": true, 00:12:41.322 "data_offset": 2048, 00:12:41.322 "data_size": 63488 00:12:41.322 }, 00:12:41.322 { 00:12:41.322 "name": "BaseBdev3", 00:12:41.322 "uuid": "a6075c4b-fb5f-54fc-9a16-7e905cb493b2", 00:12:41.322 "is_configured": true, 00:12:41.322 "data_offset": 2048, 00:12:41.322 "data_size": 63488 00:12:41.322 }, 00:12:41.322 { 00:12:41.322 "name": "BaseBdev4", 00:12:41.322 "uuid": "d1332395-2970-57de-87b9-5fc9575baf1d", 00:12:41.322 "is_configured": true, 00:12:41.322 "data_offset": 2048, 00:12:41.322 "data_size": 63488 00:12:41.322 } 00:12:41.322 ] 00:12:41.322 }' 00:12:41.322 09:57:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:41.322 09:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.889 09:57:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:41.889 09:57:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:41.889 [2024-10-21 09:57:18.296692] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:42.823 09:57:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:42.823 09:57:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.823 09:57:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.823 [2024-10-21 09:57:19.208494] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:12:42.823 [2024-10-21 09:57:19.208680] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:42.823 [2024-10-21 09:57:19.208976] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005e10 00:12:42.823 09:57:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.823 09:57:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:42.823 09:57:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:42.823 09:57:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:12:42.823 09:57:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:12:42.823 09:57:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:42.823 09:57:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:42.823 09:57:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:42.823 09:57:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:42.823 09:57:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:42.823 09:57:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:42.823 09:57:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:42.823 09:57:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:42.823 09:57:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:42.823 09:57:19 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:12:42.823 09:57:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.823 09:57:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:42.823 09:57:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.823 09:57:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.823 09:57:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.823 09:57:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:42.823 "name": "raid_bdev1", 00:12:42.823 "uuid": "a183ee63-969f-4d13-a5a3-77cab37b86bf", 00:12:42.823 "strip_size_kb": 0, 00:12:42.823 "state": "online", 00:12:42.823 "raid_level": "raid1", 00:12:42.823 "superblock": true, 00:12:42.823 "num_base_bdevs": 4, 00:12:42.823 "num_base_bdevs_discovered": 3, 00:12:42.823 "num_base_bdevs_operational": 3, 00:12:42.823 "base_bdevs_list": [ 00:12:42.823 { 00:12:42.823 "name": null, 00:12:42.823 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.823 "is_configured": false, 00:12:42.823 "data_offset": 0, 00:12:42.823 "data_size": 63488 00:12:42.823 }, 00:12:42.823 { 00:12:42.823 "name": "BaseBdev2", 00:12:42.823 "uuid": "e8e169b0-3ff5-5d41-a89b-e1ceaff3aa94", 00:12:42.823 "is_configured": true, 00:12:42.823 "data_offset": 2048, 00:12:42.823 "data_size": 63488 00:12:42.823 }, 00:12:42.823 { 00:12:42.823 "name": "BaseBdev3", 00:12:42.823 "uuid": "a6075c4b-fb5f-54fc-9a16-7e905cb493b2", 00:12:42.824 "is_configured": true, 00:12:42.824 "data_offset": 2048, 00:12:42.824 "data_size": 63488 00:12:42.824 }, 00:12:42.824 { 00:12:42.824 "name": "BaseBdev4", 00:12:42.824 "uuid": "d1332395-2970-57de-87b9-5fc9575baf1d", 00:12:42.824 "is_configured": true, 00:12:42.824 "data_offset": 2048, 00:12:42.824 "data_size": 63488 00:12:42.824 } 00:12:42.824 ] 
00:12:42.824 }' 00:12:42.824 09:57:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:42.824 09:57:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.083 09:57:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:43.083 09:57:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.083 09:57:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.342 [2024-10-21 09:57:19.680020] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:43.342 [2024-10-21 09:57:19.680064] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:43.342 [2024-10-21 09:57:19.683259] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:43.342 [2024-10-21 09:57:19.683356] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:43.342 [2024-10-21 09:57:19.683519] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:43.342 [2024-10-21 09:57:19.683588] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:12:43.342 { 00:12:43.342 "results": [ 00:12:43.342 { 00:12:43.342 "job": "raid_bdev1", 00:12:43.342 "core_mask": "0x1", 00:12:43.342 "workload": "randrw", 00:12:43.342 "percentage": 50, 00:12:43.342 "status": "finished", 00:12:43.342 "queue_depth": 1, 00:12:43.342 "io_size": 131072, 00:12:43.342 "runtime": 1.38348, 00:12:43.342 "iops": 7792.667765345361, 00:12:43.342 "mibps": 974.0834706681701, 00:12:43.342 "io_failed": 0, 00:12:43.342 "io_timeout": 0, 00:12:43.342 "avg_latency_us": 125.24698659172758, 00:12:43.342 "min_latency_us": 25.4882096069869, 00:12:43.342 "max_latency_us": 1674.172925764192 00:12:43.342 } 00:12:43.342 ], 00:12:43.342 "core_count": 1 
00:12:43.342 } 00:12:43.342 09:57:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.342 09:57:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74776 00:12:43.342 09:57:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 74776 ']' 00:12:43.342 09:57:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 74776 00:12:43.342 09:57:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:12:43.342 09:57:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:43.342 09:57:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74776 00:12:43.342 09:57:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:43.342 09:57:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:43.342 09:57:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74776' 00:12:43.342 killing process with pid 74776 00:12:43.342 09:57:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 74776 00:12:43.342 [2024-10-21 09:57:19.721726] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:43.342 09:57:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 74776 00:12:43.601 [2024-10-21 09:57:20.119283] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:44.977 09:57:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:44.977 09:57:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.eQ4JszbiYd 00:12:44.977 09:57:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:44.977 09:57:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:12:44.977 09:57:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:12:44.977 09:57:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:44.977 09:57:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:44.978 09:57:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:44.978 00:12:44.978 real 0m5.185s 00:12:44.978 user 0m5.969s 00:12:44.978 sys 0m0.756s 00:12:44.978 09:57:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:44.978 ************************************ 00:12:44.978 END TEST raid_write_error_test 00:12:44.978 ************************************ 00:12:44.978 09:57:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.235 09:57:21 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:12:45.235 09:57:21 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:12:45.235 09:57:21 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:12:45.235 09:57:21 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:12:45.235 09:57:21 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:45.235 09:57:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:45.235 ************************************ 00:12:45.235 START TEST raid_rebuild_test 00:12:45.235 ************************************ 00:12:45.235 09:57:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 false false true 00:12:45.235 09:57:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:45.235 09:57:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:45.235 09:57:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:12:45.235 
09:57:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:45.235 09:57:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:45.235 09:57:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:45.235 09:57:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:45.235 09:57:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:45.235 09:57:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:45.235 09:57:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:45.235 09:57:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:45.235 09:57:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:45.235 09:57:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:45.235 09:57:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:45.236 09:57:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:45.236 09:57:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:45.236 09:57:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:45.236 09:57:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:45.236 09:57:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:45.236 09:57:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:45.236 09:57:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:45.236 09:57:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:45.236 09:57:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:12:45.236 09:57:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=74925 00:12:45.236 09:57:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:45.236 09:57:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 74925 00:12:45.236 09:57:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 74925 ']' 00:12:45.236 09:57:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:45.236 09:57:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:45.236 09:57:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:45.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:45.236 09:57:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:45.236 09:57:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.236 [2024-10-21 09:57:21.729060] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:12:45.236 [2024-10-21 09:57:21.729338] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74925 ] 00:12:45.236 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:45.236 Zero copy mechanism will not be used. 
00:12:45.494 [2024-10-21 09:57:21.898218] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:45.494 [2024-10-21 09:57:22.052053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:45.752 [2024-10-21 09:57:22.335315] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:45.752 [2024-10-21 09:57:22.335382] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:46.011 09:57:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:46.011 09:57:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:12:46.011 09:57:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:46.011 09:57:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:46.011 09:57:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.011 09:57:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.270 BaseBdev1_malloc 00:12:46.270 09:57:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.270 09:57:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:46.270 09:57:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.270 09:57:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.270 [2024-10-21 09:57:22.665783] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:46.270 [2024-10-21 09:57:22.665914] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:46.270 [2024-10-21 09:57:22.665953] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:12:46.270 [2024-10-21 09:57:22.665969] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:46.270 [2024-10-21 09:57:22.668931] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:46.270 [2024-10-21 09:57:22.668981] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:46.270 BaseBdev1 00:12:46.270 09:57:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.270 09:57:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:46.270 09:57:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:46.270 09:57:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.270 09:57:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.270 BaseBdev2_malloc 00:12:46.270 09:57:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.270 09:57:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:46.270 09:57:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.270 09:57:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.270 [2024-10-21 09:57:22.735065] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:46.270 [2024-10-21 09:57:22.735146] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:46.270 [2024-10-21 09:57:22.735174] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:12:46.270 [2024-10-21 09:57:22.735188] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:46.270 [2024-10-21 09:57:22.737963] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:46.270 [2024-10-21 09:57:22.738010] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:46.270 BaseBdev2 00:12:46.270 09:57:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.270 09:57:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:46.270 09:57:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.270 09:57:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.270 spare_malloc 00:12:46.270 09:57:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.270 09:57:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:46.270 09:57:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.270 09:57:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.270 spare_delay 00:12:46.270 09:57:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.270 09:57:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:46.270 09:57:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.270 09:57:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.270 [2024-10-21 09:57:22.829617] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:46.270 [2024-10-21 09:57:22.829688] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:46.270 [2024-10-21 09:57:22.829715] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:12:46.270 [2024-10-21 09:57:22.829729] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:46.270 [2024-10-21 
09:57:22.832458] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:46.270 [2024-10-21 09:57:22.832503] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:46.270 spare 00:12:46.270 09:57:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.270 09:57:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:46.270 09:57:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.270 09:57:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.270 [2024-10-21 09:57:22.841636] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:46.270 [2024-10-21 09:57:22.843950] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:46.270 [2024-10-21 09:57:22.844123] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000005b80 00:12:46.270 [2024-10-21 09:57:22.844140] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:46.270 [2024-10-21 09:57:22.844472] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:12:46.270 [2024-10-21 09:57:22.844674] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000005b80 00:12:46.270 [2024-10-21 09:57:22.844686] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000005b80 00:12:46.270 [2024-10-21 09:57:22.844886] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:46.270 09:57:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.270 09:57:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:46.270 09:57:22 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:46.270 09:57:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:46.270 09:57:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:46.270 09:57:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:46.270 09:57:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:46.270 09:57:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:46.270 09:57:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:46.270 09:57:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:46.270 09:57:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:46.270 09:57:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:46.270 09:57:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.270 09:57:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.270 09:57:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.528 09:57:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.528 09:57:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:46.528 "name": "raid_bdev1", 00:12:46.528 "uuid": "b735200d-ec0d-4b6b-946f-90bf1ba9d4c0", 00:12:46.528 "strip_size_kb": 0, 00:12:46.528 "state": "online", 00:12:46.528 "raid_level": "raid1", 00:12:46.528 "superblock": false, 00:12:46.528 "num_base_bdevs": 2, 00:12:46.528 "num_base_bdevs_discovered": 2, 00:12:46.528 "num_base_bdevs_operational": 2, 00:12:46.528 "base_bdevs_list": [ 00:12:46.528 { 00:12:46.528 "name": "BaseBdev1", 
00:12:46.528 "uuid": "a60b3f9e-74b1-5bc4-88c6-cf56802c9f07", 00:12:46.528 "is_configured": true, 00:12:46.528 "data_offset": 0, 00:12:46.528 "data_size": 65536 00:12:46.528 }, 00:12:46.528 { 00:12:46.528 "name": "BaseBdev2", 00:12:46.528 "uuid": "c9710b45-cbb9-5dfe-a4b0-f0022369cb1a", 00:12:46.528 "is_configured": true, 00:12:46.528 "data_offset": 0, 00:12:46.528 "data_size": 65536 00:12:46.528 } 00:12:46.528 ] 00:12:46.528 }' 00:12:46.528 09:57:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:46.528 09:57:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.786 09:57:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:46.786 09:57:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:46.786 09:57:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.786 09:57:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.786 [2024-10-21 09:57:23.317240] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:46.786 09:57:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.786 09:57:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:12:46.786 09:57:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.786 09:57:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.786 09:57:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.786 09:57:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:46.786 09:57:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.047 09:57:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:12:47.047 
09:57:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:47.047 09:57:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:47.047 09:57:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:47.047 09:57:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:47.047 09:57:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:47.047 09:57:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:47.047 09:57:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:47.047 09:57:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:47.047 09:57:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:47.047 09:57:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:47.047 09:57:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:47.047 09:57:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:47.047 09:57:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:47.047 [2024-10-21 09:57:23.628494] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:12:47.307 /dev/nbd0 00:12:47.307 09:57:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:47.307 09:57:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:47.307 09:57:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:47.307 09:57:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:12:47.307 09:57:23 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:47.307 09:57:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:47.307 09:57:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:47.307 09:57:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:12:47.307 09:57:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:47.307 09:57:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:47.307 09:57:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:47.307 1+0 records in 00:12:47.307 1+0 records out 00:12:47.307 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000384581 s, 10.7 MB/s 00:12:47.307 09:57:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:47.307 09:57:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:12:47.307 09:57:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:47.307 09:57:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:47.307 09:57:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:12:47.307 09:57:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:47.307 09:57:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:47.307 09:57:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:12:47.307 09:57:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:12:47.307 09:57:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 
00:12:52.579 65536+0 records in 00:12:52.579 65536+0 records out 00:12:52.579 33554432 bytes (34 MB, 32 MiB) copied, 4.85963 s, 6.9 MB/s 00:12:52.579 09:57:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:52.579 09:57:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:52.579 09:57:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:52.579 09:57:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:52.579 09:57:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:52.579 09:57:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:52.579 09:57:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:52.579 09:57:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:52.579 [2024-10-21 09:57:28.798887] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:52.579 09:57:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:52.579 09:57:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:52.579 09:57:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:52.579 09:57:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:52.579 09:57:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:52.579 09:57:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:52.579 09:57:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:52.579 09:57:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:52.579 09:57:28 bdev_raid.raid_rebuild_test 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.579 09:57:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.579 [2024-10-21 09:57:28.819573] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:52.579 09:57:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.579 09:57:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:52.579 09:57:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:52.579 09:57:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:52.579 09:57:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:52.579 09:57:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:52.579 09:57:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:52.579 09:57:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:52.579 09:57:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:52.579 09:57:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:52.579 09:57:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:52.579 09:57:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.579 09:57:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.579 09:57:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.579 09:57:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:52.579 09:57:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.579 09:57:28 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:52.579 "name": "raid_bdev1", 00:12:52.579 "uuid": "b735200d-ec0d-4b6b-946f-90bf1ba9d4c0", 00:12:52.579 "strip_size_kb": 0, 00:12:52.579 "state": "online", 00:12:52.579 "raid_level": "raid1", 00:12:52.579 "superblock": false, 00:12:52.579 "num_base_bdevs": 2, 00:12:52.579 "num_base_bdevs_discovered": 1, 00:12:52.579 "num_base_bdevs_operational": 1, 00:12:52.579 "base_bdevs_list": [ 00:12:52.579 { 00:12:52.579 "name": null, 00:12:52.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.579 "is_configured": false, 00:12:52.579 "data_offset": 0, 00:12:52.579 "data_size": 65536 00:12:52.579 }, 00:12:52.579 { 00:12:52.579 "name": "BaseBdev2", 00:12:52.579 "uuid": "c9710b45-cbb9-5dfe-a4b0-f0022369cb1a", 00:12:52.579 "is_configured": true, 00:12:52.579 "data_offset": 0, 00:12:52.579 "data_size": 65536 00:12:52.579 } 00:12:52.579 ] 00:12:52.579 }' 00:12:52.579 09:57:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:52.579 09:57:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.839 09:57:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:52.839 09:57:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.839 09:57:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.839 [2024-10-21 09:57:29.318853] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:52.839 [2024-10-21 09:57:29.342797] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09960 00:12:52.839 09:57:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.839 09:57:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:52.839 [2024-10-21 09:57:29.345593] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started 
rebuild on raid bdev raid_bdev1 00:12:53.779 09:57:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:53.779 09:57:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:53.779 09:57:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:53.779 09:57:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:53.779 09:57:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:53.779 09:57:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.779 09:57:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:53.779 09:57:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.779 09:57:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.038 09:57:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.038 09:57:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:54.038 "name": "raid_bdev1", 00:12:54.038 "uuid": "b735200d-ec0d-4b6b-946f-90bf1ba9d4c0", 00:12:54.038 "strip_size_kb": 0, 00:12:54.038 "state": "online", 00:12:54.038 "raid_level": "raid1", 00:12:54.038 "superblock": false, 00:12:54.038 "num_base_bdevs": 2, 00:12:54.038 "num_base_bdevs_discovered": 2, 00:12:54.038 "num_base_bdevs_operational": 2, 00:12:54.038 "process": { 00:12:54.038 "type": "rebuild", 00:12:54.038 "target": "spare", 00:12:54.038 "progress": { 00:12:54.038 "blocks": 20480, 00:12:54.038 "percent": 31 00:12:54.038 } 00:12:54.038 }, 00:12:54.038 "base_bdevs_list": [ 00:12:54.038 { 00:12:54.038 "name": "spare", 00:12:54.038 "uuid": "ac76e394-d508-5d0c-b391-b9e6f84c2621", 00:12:54.038 "is_configured": true, 00:12:54.038 "data_offset": 0, 00:12:54.038 
"data_size": 65536 00:12:54.038 }, 00:12:54.038 { 00:12:54.038 "name": "BaseBdev2", 00:12:54.038 "uuid": "c9710b45-cbb9-5dfe-a4b0-f0022369cb1a", 00:12:54.038 "is_configured": true, 00:12:54.038 "data_offset": 0, 00:12:54.038 "data_size": 65536 00:12:54.038 } 00:12:54.038 ] 00:12:54.038 }' 00:12:54.038 09:57:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:54.038 09:57:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:54.038 09:57:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:54.038 09:57:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:54.038 09:57:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:54.038 09:57:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.038 09:57:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.038 [2024-10-21 09:57:30.508379] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:54.038 [2024-10-21 09:57:30.556228] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:54.038 [2024-10-21 09:57:30.556342] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:54.038 [2024-10-21 09:57:30.556359] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:54.038 [2024-10-21 09:57:30.556369] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:54.038 09:57:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.038 09:57:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:54.038 09:57:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:12:54.038 09:57:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:54.038 09:57:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:54.038 09:57:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:54.038 09:57:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:54.038 09:57:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:54.038 09:57:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:54.038 09:57:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:54.038 09:57:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:54.038 09:57:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.038 09:57:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:54.038 09:57:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.038 09:57:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.038 09:57:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.296 09:57:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:54.296 "name": "raid_bdev1", 00:12:54.296 "uuid": "b735200d-ec0d-4b6b-946f-90bf1ba9d4c0", 00:12:54.296 "strip_size_kb": 0, 00:12:54.296 "state": "online", 00:12:54.296 "raid_level": "raid1", 00:12:54.296 "superblock": false, 00:12:54.296 "num_base_bdevs": 2, 00:12:54.296 "num_base_bdevs_discovered": 1, 00:12:54.296 "num_base_bdevs_operational": 1, 00:12:54.296 "base_bdevs_list": [ 00:12:54.296 { 00:12:54.296 "name": null, 00:12:54.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:54.296 
"is_configured": false, 00:12:54.296 "data_offset": 0, 00:12:54.296 "data_size": 65536 00:12:54.296 }, 00:12:54.296 { 00:12:54.296 "name": "BaseBdev2", 00:12:54.296 "uuid": "c9710b45-cbb9-5dfe-a4b0-f0022369cb1a", 00:12:54.296 "is_configured": true, 00:12:54.296 "data_offset": 0, 00:12:54.296 "data_size": 65536 00:12:54.296 } 00:12:54.296 ] 00:12:54.296 }' 00:12:54.296 09:57:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:54.296 09:57:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.559 09:57:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:54.559 09:57:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:54.559 09:57:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:54.559 09:57:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:54.559 09:57:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:54.559 09:57:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.559 09:57:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:54.559 09:57:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.559 09:57:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.559 09:57:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.559 09:57:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:54.559 "name": "raid_bdev1", 00:12:54.559 "uuid": "b735200d-ec0d-4b6b-946f-90bf1ba9d4c0", 00:12:54.559 "strip_size_kb": 0, 00:12:54.559 "state": "online", 00:12:54.559 "raid_level": "raid1", 00:12:54.559 "superblock": false, 00:12:54.559 "num_base_bdevs": 2, 00:12:54.559 
"num_base_bdevs_discovered": 1, 00:12:54.559 "num_base_bdevs_operational": 1, 00:12:54.559 "base_bdevs_list": [ 00:12:54.559 { 00:12:54.559 "name": null, 00:12:54.559 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:54.559 "is_configured": false, 00:12:54.559 "data_offset": 0, 00:12:54.559 "data_size": 65536 00:12:54.559 }, 00:12:54.559 { 00:12:54.559 "name": "BaseBdev2", 00:12:54.559 "uuid": "c9710b45-cbb9-5dfe-a4b0-f0022369cb1a", 00:12:54.559 "is_configured": true, 00:12:54.559 "data_offset": 0, 00:12:54.559 "data_size": 65536 00:12:54.559 } 00:12:54.559 ] 00:12:54.559 }' 00:12:54.559 09:57:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:54.559 09:57:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:54.559 09:57:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:54.818 09:57:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:54.818 09:57:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:54.818 09:57:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.818 09:57:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.818 [2024-10-21 09:57:31.196168] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:54.818 [2024-10-21 09:57:31.217362] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09a30 00:12:54.818 09:57:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.818 09:57:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:54.818 [2024-10-21 09:57:31.219795] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:55.749 09:57:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:55.749 09:57:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:55.749 09:57:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:55.749 09:57:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:55.749 09:57:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:55.749 09:57:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.749 09:57:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:55.749 09:57:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.749 09:57:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.749 09:57:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.749 09:57:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:55.749 "name": "raid_bdev1", 00:12:55.749 "uuid": "b735200d-ec0d-4b6b-946f-90bf1ba9d4c0", 00:12:55.749 "strip_size_kb": 0, 00:12:55.749 "state": "online", 00:12:55.749 "raid_level": "raid1", 00:12:55.749 "superblock": false, 00:12:55.749 "num_base_bdevs": 2, 00:12:55.749 "num_base_bdevs_discovered": 2, 00:12:55.749 "num_base_bdevs_operational": 2, 00:12:55.749 "process": { 00:12:55.749 "type": "rebuild", 00:12:55.749 "target": "spare", 00:12:55.749 "progress": { 00:12:55.749 "blocks": 20480, 00:12:55.749 "percent": 31 00:12:55.749 } 00:12:55.749 }, 00:12:55.749 "base_bdevs_list": [ 00:12:55.749 { 00:12:55.749 "name": "spare", 00:12:55.749 "uuid": "ac76e394-d508-5d0c-b391-b9e6f84c2621", 00:12:55.749 "is_configured": true, 00:12:55.749 "data_offset": 0, 00:12:55.749 "data_size": 65536 00:12:55.749 }, 00:12:55.749 { 00:12:55.749 "name": "BaseBdev2", 00:12:55.749 "uuid": 
"c9710b45-cbb9-5dfe-a4b0-f0022369cb1a", 00:12:55.749 "is_configured": true, 00:12:55.749 "data_offset": 0, 00:12:55.749 "data_size": 65536 00:12:55.749 } 00:12:55.749 ] 00:12:55.749 }' 00:12:55.749 09:57:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:55.749 09:57:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:55.750 09:57:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:56.007 09:57:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:56.007 09:57:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:12:56.007 09:57:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:56.007 09:57:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:56.008 09:57:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:56.008 09:57:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=379 00:12:56.008 09:57:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:56.008 09:57:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:56.008 09:57:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:56.008 09:57:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:56.008 09:57:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:56.008 09:57:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:56.008 09:57:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:56.008 09:57:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:12:56.008 09:57:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.008 09:57:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.008 09:57:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.008 09:57:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:56.008 "name": "raid_bdev1", 00:12:56.008 "uuid": "b735200d-ec0d-4b6b-946f-90bf1ba9d4c0", 00:12:56.008 "strip_size_kb": 0, 00:12:56.008 "state": "online", 00:12:56.008 "raid_level": "raid1", 00:12:56.008 "superblock": false, 00:12:56.008 "num_base_bdevs": 2, 00:12:56.008 "num_base_bdevs_discovered": 2, 00:12:56.008 "num_base_bdevs_operational": 2, 00:12:56.008 "process": { 00:12:56.008 "type": "rebuild", 00:12:56.008 "target": "spare", 00:12:56.008 "progress": { 00:12:56.008 "blocks": 22528, 00:12:56.008 "percent": 34 00:12:56.008 } 00:12:56.008 }, 00:12:56.008 "base_bdevs_list": [ 00:12:56.008 { 00:12:56.008 "name": "spare", 00:12:56.008 "uuid": "ac76e394-d508-5d0c-b391-b9e6f84c2621", 00:12:56.008 "is_configured": true, 00:12:56.008 "data_offset": 0, 00:12:56.008 "data_size": 65536 00:12:56.008 }, 00:12:56.008 { 00:12:56.008 "name": "BaseBdev2", 00:12:56.008 "uuid": "c9710b45-cbb9-5dfe-a4b0-f0022369cb1a", 00:12:56.008 "is_configured": true, 00:12:56.008 "data_offset": 0, 00:12:56.008 "data_size": 65536 00:12:56.008 } 00:12:56.008 ] 00:12:56.008 }' 00:12:56.008 09:57:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:56.008 09:57:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:56.008 09:57:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:56.008 09:57:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:56.008 09:57:32 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:12:56.944 09:57:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:56.944 09:57:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:56.944 09:57:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:56.944 09:57:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:56.944 09:57:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:56.944 09:57:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:56.944 09:57:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.944 09:57:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:56.944 09:57:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.944 09:57:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.944 09:57:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.203 09:57:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:57.204 "name": "raid_bdev1", 00:12:57.204 "uuid": "b735200d-ec0d-4b6b-946f-90bf1ba9d4c0", 00:12:57.204 "strip_size_kb": 0, 00:12:57.204 "state": "online", 00:12:57.204 "raid_level": "raid1", 00:12:57.204 "superblock": false, 00:12:57.204 "num_base_bdevs": 2, 00:12:57.204 "num_base_bdevs_discovered": 2, 00:12:57.204 "num_base_bdevs_operational": 2, 00:12:57.204 "process": { 00:12:57.204 "type": "rebuild", 00:12:57.204 "target": "spare", 00:12:57.204 "progress": { 00:12:57.204 "blocks": 45056, 00:12:57.204 "percent": 68 00:12:57.204 } 00:12:57.204 }, 00:12:57.204 "base_bdevs_list": [ 00:12:57.204 { 00:12:57.204 "name": "spare", 00:12:57.204 "uuid": 
"ac76e394-d508-5d0c-b391-b9e6f84c2621", 00:12:57.204 "is_configured": true, 00:12:57.204 "data_offset": 0, 00:12:57.204 "data_size": 65536 00:12:57.204 }, 00:12:57.204 { 00:12:57.204 "name": "BaseBdev2", 00:12:57.204 "uuid": "c9710b45-cbb9-5dfe-a4b0-f0022369cb1a", 00:12:57.204 "is_configured": true, 00:12:57.204 "data_offset": 0, 00:12:57.204 "data_size": 65536 00:12:57.204 } 00:12:57.204 ] 00:12:57.204 }' 00:12:57.204 09:57:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:57.204 09:57:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:57.204 09:57:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:57.204 09:57:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:57.204 09:57:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:58.158 [2024-10-21 09:57:34.446800] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:58.158 [2024-10-21 09:57:34.446910] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:58.158 [2024-10-21 09:57:34.446970] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:58.158 09:57:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:58.158 09:57:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:58.158 09:57:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:58.158 09:57:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:58.158 09:57:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:58.158 09:57:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:58.158 09:57:34 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.158 09:57:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:58.158 09:57:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.158 09:57:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.158 09:57:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.158 09:57:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:58.158 "name": "raid_bdev1", 00:12:58.158 "uuid": "b735200d-ec0d-4b6b-946f-90bf1ba9d4c0", 00:12:58.158 "strip_size_kb": 0, 00:12:58.158 "state": "online", 00:12:58.158 "raid_level": "raid1", 00:12:58.158 "superblock": false, 00:12:58.158 "num_base_bdevs": 2, 00:12:58.158 "num_base_bdevs_discovered": 2, 00:12:58.158 "num_base_bdevs_operational": 2, 00:12:58.158 "base_bdevs_list": [ 00:12:58.158 { 00:12:58.158 "name": "spare", 00:12:58.158 "uuid": "ac76e394-d508-5d0c-b391-b9e6f84c2621", 00:12:58.158 "is_configured": true, 00:12:58.158 "data_offset": 0, 00:12:58.158 "data_size": 65536 00:12:58.158 }, 00:12:58.158 { 00:12:58.158 "name": "BaseBdev2", 00:12:58.158 "uuid": "c9710b45-cbb9-5dfe-a4b0-f0022369cb1a", 00:12:58.158 "is_configured": true, 00:12:58.158 "data_offset": 0, 00:12:58.158 "data_size": 65536 00:12:58.158 } 00:12:58.158 ] 00:12:58.158 }' 00:12:58.158 09:57:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:58.158 09:57:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:58.158 09:57:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:58.420 09:57:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:58.420 09:57:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # 
break 00:12:58.420 09:57:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:58.420 09:57:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:58.420 09:57:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:58.420 09:57:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:58.420 09:57:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:58.420 09:57:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.420 09:57:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:58.420 09:57:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.420 09:57:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.420 09:57:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.420 09:57:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:58.420 "name": "raid_bdev1", 00:12:58.420 "uuid": "b735200d-ec0d-4b6b-946f-90bf1ba9d4c0", 00:12:58.420 "strip_size_kb": 0, 00:12:58.420 "state": "online", 00:12:58.420 "raid_level": "raid1", 00:12:58.420 "superblock": false, 00:12:58.420 "num_base_bdevs": 2, 00:12:58.420 "num_base_bdevs_discovered": 2, 00:12:58.420 "num_base_bdevs_operational": 2, 00:12:58.420 "base_bdevs_list": [ 00:12:58.420 { 00:12:58.420 "name": "spare", 00:12:58.420 "uuid": "ac76e394-d508-5d0c-b391-b9e6f84c2621", 00:12:58.420 "is_configured": true, 00:12:58.420 "data_offset": 0, 00:12:58.420 "data_size": 65536 00:12:58.420 }, 00:12:58.420 { 00:12:58.420 "name": "BaseBdev2", 00:12:58.420 "uuid": "c9710b45-cbb9-5dfe-a4b0-f0022369cb1a", 00:12:58.420 "is_configured": true, 00:12:58.420 "data_offset": 0, 00:12:58.420 "data_size": 65536 
00:12:58.420 } 00:12:58.420 ] 00:12:58.420 }' 00:12:58.420 09:57:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:58.420 09:57:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:58.420 09:57:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:58.420 09:57:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:58.420 09:57:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:58.420 09:57:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:58.420 09:57:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:58.420 09:57:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:58.420 09:57:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:58.420 09:57:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:58.420 09:57:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:58.420 09:57:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:58.420 09:57:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:58.420 09:57:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:58.420 09:57:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.420 09:57:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.420 09:57:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.420 09:57:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:58.420 
09:57:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.420 09:57:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:58.420 "name": "raid_bdev1", 00:12:58.420 "uuid": "b735200d-ec0d-4b6b-946f-90bf1ba9d4c0", 00:12:58.420 "strip_size_kb": 0, 00:12:58.420 "state": "online", 00:12:58.420 "raid_level": "raid1", 00:12:58.420 "superblock": false, 00:12:58.420 "num_base_bdevs": 2, 00:12:58.420 "num_base_bdevs_discovered": 2, 00:12:58.420 "num_base_bdevs_operational": 2, 00:12:58.420 "base_bdevs_list": [ 00:12:58.420 { 00:12:58.420 "name": "spare", 00:12:58.420 "uuid": "ac76e394-d508-5d0c-b391-b9e6f84c2621", 00:12:58.420 "is_configured": true, 00:12:58.420 "data_offset": 0, 00:12:58.420 "data_size": 65536 00:12:58.420 }, 00:12:58.420 { 00:12:58.420 "name": "BaseBdev2", 00:12:58.420 "uuid": "c9710b45-cbb9-5dfe-a4b0-f0022369cb1a", 00:12:58.420 "is_configured": true, 00:12:58.420 "data_offset": 0, 00:12:58.420 "data_size": 65536 00:12:58.420 } 00:12:58.420 ] 00:12:58.420 }' 00:12:58.420 09:57:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:58.420 09:57:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.989 09:57:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:58.989 09:57:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.989 09:57:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.989 [2024-10-21 09:57:35.418606] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:58.989 [2024-10-21 09:57:35.418748] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:58.989 [2024-10-21 09:57:35.418889] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:58.989 [2024-10-21 09:57:35.418990] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:58.989 [2024-10-21 09:57:35.419064] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000005b80 name raid_bdev1, state offline 00:12:58.989 09:57:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.989 09:57:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:12:58.989 09:57:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.989 09:57:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.989 09:57:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.989 09:57:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.989 09:57:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:58.989 09:57:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:58.989 09:57:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:58.989 09:57:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:58.989 09:57:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:58.989 09:57:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:12:58.989 09:57:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:58.989 09:57:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:58.989 09:57:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:58.990 09:57:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:58.990 09:57:35 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:58.990 09:57:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:58.990 09:57:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:59.249 /dev/nbd0 00:12:59.249 09:57:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:59.249 09:57:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:59.249 09:57:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:59.249 09:57:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:12:59.249 09:57:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:59.249 09:57:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:59.249 09:57:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:59.249 09:57:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:12:59.249 09:57:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:59.249 09:57:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:59.249 09:57:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:59.249 1+0 records in 00:12:59.249 1+0 records out 00:12:59.249 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000616296 s, 6.6 MB/s 00:12:59.249 09:57:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:59.249 09:57:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:12:59.249 09:57:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # 
rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:59.249 09:57:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:59.249 09:57:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:12:59.249 09:57:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:59.249 09:57:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:59.249 09:57:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:59.508 /dev/nbd1 00:12:59.508 09:57:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:59.508 09:57:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:59.508 09:57:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:59.508 09:57:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:12:59.508 09:57:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:59.508 09:57:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:59.508 09:57:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:59.508 09:57:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:12:59.508 09:57:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:59.508 09:57:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:59.508 09:57:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:59.508 1+0 records in 00:12:59.508 1+0 records out 00:12:59.508 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000350292 s, 11.7 MB/s 00:12:59.508 09:57:36 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:59.508 09:57:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:12:59.508 09:57:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:59.508 09:57:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:59.508 09:57:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:12:59.508 09:57:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:59.508 09:57:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:59.508 09:57:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:59.766 09:57:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:59.766 09:57:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:59.766 09:57:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:59.766 09:57:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:59.766 09:57:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:59.766 09:57:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:59.766 09:57:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:00.044 09:57:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:00.044 09:57:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:00.044 09:57:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:00.044 
09:57:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:00.044 09:57:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:00.044 09:57:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:00.044 09:57:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:00.044 09:57:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:00.044 09:57:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:00.044 09:57:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:00.307 09:57:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:00.307 09:57:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:00.307 09:57:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:00.307 09:57:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:00.307 09:57:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:00.307 09:57:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:00.307 09:57:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:00.307 09:57:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:00.307 09:57:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:00.307 09:57:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 74925 00:13:00.307 09:57:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 74925 ']' 00:13:00.307 09:57:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 74925 00:13:00.307 09:57:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 
-- # uname 00:13:00.307 09:57:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:00.307 09:57:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74925 00:13:00.307 09:57:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:00.307 09:57:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:00.307 09:57:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74925' 00:13:00.307 killing process with pid 74925 00:13:00.307 Received shutdown signal, test time was about 60.000000 seconds 00:13:00.307 00:13:00.307 Latency(us) 00:13:00.307 [2024-10-21T09:57:36.902Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:00.307 [2024-10-21T09:57:36.902Z] =================================================================================================================== 00:13:00.307 [2024-10-21T09:57:36.902Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:00.307 09:57:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@969 -- # kill 74925 00:13:00.307 [2024-10-21 09:57:36.866404] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:00.307 09:57:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@974 -- # wait 74925 00:13:00.876 [2024-10-21 09:57:37.197299] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:01.813 09:57:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:13:01.813 00:13:01.813 real 0m16.776s 00:13:01.813 user 0m18.878s 00:13:01.813 sys 0m3.522s 00:13:01.813 ************************************ 00:13:01.813 END TEST raid_rebuild_test 00:13:01.813 ************************************ 00:13:01.813 09:57:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:01.813 09:57:38 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.073 09:57:38 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:13:02.073 09:57:38 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:13:02.073 09:57:38 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:02.073 09:57:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:02.073 ************************************ 00:13:02.073 START TEST raid_rebuild_test_sb 00:13:02.073 ************************************ 00:13:02.073 09:57:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:13:02.073 09:57:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:02.073 09:57:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:13:02.073 09:57:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:02.073 09:57:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:02.073 09:57:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:02.073 09:57:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:02.073 09:57:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:02.073 09:57:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:02.073 09:57:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:02.073 09:57:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:02.073 09:57:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:02.073 09:57:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:02.073 09:57:38 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:02.073 09:57:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:02.073 09:57:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:02.073 09:57:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:02.073 09:57:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:02.073 09:57:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:02.073 09:57:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:02.073 09:57:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:02.073 09:57:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:02.073 09:57:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:02.073 09:57:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:02.073 09:57:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:02.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:02.073 09:57:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=75354 00:13:02.073 09:57:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 75354 00:13:02.073 09:57:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 75354 ']' 00:13:02.073 09:57:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:02.073 09:57:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:02.073 09:57:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:02.073 09:57:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:02.073 09:57:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:02.073 09:57:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.073 [2024-10-21 09:57:38.568791] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:13:02.073 [2024-10-21 09:57:38.569116] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:13:02.073 Zero copy mechanism will not be used. 
00:13:02.073 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75354 ] 00:13:02.332 [2024-10-21 09:57:38.741996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:02.332 [2024-10-21 09:57:38.864623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:02.622 [2024-10-21 09:57:39.089137] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:02.622 [2024-10-21 09:57:39.089265] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:02.904 09:57:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:02.904 09:57:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:13:02.904 09:57:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:02.904 09:57:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:02.904 09:57:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.904 09:57:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.904 BaseBdev1_malloc 00:13:02.904 09:57:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.904 09:57:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:02.904 09:57:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.904 09:57:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.904 [2024-10-21 09:57:39.439785] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:02.904 [2024-10-21 09:57:39.439890] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:13:02.904 [2024-10-21 09:57:39.439915] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:13:02.904 [2024-10-21 09:57:39.439927] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:02.904 [2024-10-21 09:57:39.442074] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:02.904 [2024-10-21 09:57:39.442212] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:02.904 BaseBdev1 00:13:02.904 09:57:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.904 09:57:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:02.904 09:57:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:02.904 09:57:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.904 09:57:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.904 BaseBdev2_malloc 00:13:02.904 09:57:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.904 09:57:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:02.904 09:57:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.904 09:57:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.904 [2024-10-21 09:57:39.494664] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:02.904 [2024-10-21 09:57:39.494818] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:02.904 [2024-10-21 09:57:39.494843] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:13:02.904 [2024-10-21 09:57:39.494854] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:02.904 [2024-10-21 09:57:39.497001] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:02.904 [2024-10-21 09:57:39.497047] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:03.165 BaseBdev2 00:13:03.165 09:57:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.165 09:57:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:03.165 09:57:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.165 09:57:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.165 spare_malloc 00:13:03.165 09:57:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.165 09:57:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:03.165 09:57:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.165 09:57:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.165 spare_delay 00:13:03.165 09:57:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.165 09:57:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:03.165 09:57:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.165 09:57:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.165 [2024-10-21 09:57:39.575488] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:03.165 [2024-10-21 09:57:39.575554] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:13:03.165 [2024-10-21 09:57:39.575589] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:13:03.165 [2024-10-21 09:57:39.575601] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:03.165 [2024-10-21 09:57:39.577672] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:03.165 [2024-10-21 09:57:39.577714] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:03.165 spare 00:13:03.165 09:57:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.165 09:57:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:13:03.165 09:57:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.165 09:57:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.165 [2024-10-21 09:57:39.583516] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:03.165 [2024-10-21 09:57:39.585333] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:03.165 [2024-10-21 09:57:39.585504] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000005b80 00:13:03.165 [2024-10-21 09:57:39.585519] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:03.165 [2024-10-21 09:57:39.585786] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:13:03.165 [2024-10-21 09:57:39.585953] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000005b80 00:13:03.165 [2024-10-21 09:57:39.585962] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000005b80 00:13:03.165 [2024-10-21 09:57:39.586100] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:13:03.165 09:57:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.165 09:57:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:03.165 09:57:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:03.165 09:57:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:03.165 09:57:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:03.165 09:57:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:03.165 09:57:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:03.165 09:57:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:03.165 09:57:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:03.165 09:57:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:03.165 09:57:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:03.165 09:57:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.165 09:57:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:03.165 09:57:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.165 09:57:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.165 09:57:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.165 09:57:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:03.165 "name": "raid_bdev1", 00:13:03.165 "uuid": "8ec6b5e6-f49b-455e-b726-1b0c3770a0a4", 00:13:03.165 
"strip_size_kb": 0, 00:13:03.165 "state": "online", 00:13:03.165 "raid_level": "raid1", 00:13:03.165 "superblock": true, 00:13:03.165 "num_base_bdevs": 2, 00:13:03.165 "num_base_bdevs_discovered": 2, 00:13:03.165 "num_base_bdevs_operational": 2, 00:13:03.165 "base_bdevs_list": [ 00:13:03.165 { 00:13:03.165 "name": "BaseBdev1", 00:13:03.165 "uuid": "2bbe834b-8da2-5a12-a96a-73b59df415ce", 00:13:03.165 "is_configured": true, 00:13:03.165 "data_offset": 2048, 00:13:03.165 "data_size": 63488 00:13:03.165 }, 00:13:03.165 { 00:13:03.165 "name": "BaseBdev2", 00:13:03.165 "uuid": "8c5b3cc7-a0d6-5a4c-adb9-688cb0007c51", 00:13:03.165 "is_configured": true, 00:13:03.165 "data_offset": 2048, 00:13:03.165 "data_size": 63488 00:13:03.165 } 00:13:03.165 ] 00:13:03.165 }' 00:13:03.165 09:57:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:03.165 09:57:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.424 09:57:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:03.424 09:57:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:03.424 09:57:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.424 09:57:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.683 [2024-10-21 09:57:40.023157] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:03.683 09:57:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.683 09:57:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:13:03.683 09:57:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:03.683 09:57:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.684 09:57:40 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.684 09:57:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.684 09:57:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.684 09:57:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:03.684 09:57:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:03.684 09:57:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:03.684 09:57:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:03.684 09:57:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:03.684 09:57:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:03.684 09:57:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:03.684 09:57:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:03.684 09:57:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:03.684 09:57:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:03.684 09:57:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:03.684 09:57:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:03.684 09:57:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:03.684 09:57:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:03.944 [2024-10-21 09:57:40.286508] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:13:03.944 /dev/nbd0 00:13:03.944 
09:57:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:03.944 09:57:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:03.944 09:57:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:03.944 09:57:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:13:03.944 09:57:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:03.944 09:57:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:03.944 09:57:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:03.944 09:57:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:13:03.944 09:57:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:03.944 09:57:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:03.944 09:57:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:03.944 1+0 records in 00:13:03.944 1+0 records out 00:13:03.944 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000296555 s, 13.8 MB/s 00:13:03.944 09:57:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:03.944 09:57:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:13:03.944 09:57:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:03.944 09:57:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:03.944 09:57:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:13:03.944 09:57:40 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:03.944 09:57:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:03.944 09:57:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:13:03.944 09:57:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:13:03.944 09:57:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:13:09.219 63488+0 records in 00:13:09.219 63488+0 records out 00:13:09.219 32505856 bytes (33 MB, 31 MiB) copied, 4.8305 s, 6.7 MB/s 00:13:09.219 09:57:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:09.219 09:57:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:09.219 09:57:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:09.219 09:57:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:09.219 09:57:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:09.219 09:57:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:09.219 09:57:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:09.219 [2024-10-21 09:57:45.409422] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:09.219 09:57:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:09.219 09:57:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:09.219 09:57:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:09.219 09:57:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 
00:13:09.219 09:57:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:09.219 09:57:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:09.219 09:57:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:09.219 09:57:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:09.219 09:57:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:09.219 09:57:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.219 09:57:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.219 [2024-10-21 09:57:45.445506] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:09.219 09:57:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.219 09:57:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:09.219 09:57:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:09.219 09:57:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:09.220 09:57:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:09.220 09:57:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:09.220 09:57:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:09.220 09:57:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:09.220 09:57:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:09.220 09:57:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:09.220 09:57:45 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@111 -- # local tmp 00:13:09.220 09:57:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:09.220 09:57:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.220 09:57:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.220 09:57:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.220 09:57:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.220 09:57:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:09.220 "name": "raid_bdev1", 00:13:09.220 "uuid": "8ec6b5e6-f49b-455e-b726-1b0c3770a0a4", 00:13:09.220 "strip_size_kb": 0, 00:13:09.220 "state": "online", 00:13:09.220 "raid_level": "raid1", 00:13:09.220 "superblock": true, 00:13:09.220 "num_base_bdevs": 2, 00:13:09.220 "num_base_bdevs_discovered": 1, 00:13:09.220 "num_base_bdevs_operational": 1, 00:13:09.220 "base_bdevs_list": [ 00:13:09.220 { 00:13:09.220 "name": null, 00:13:09.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:09.220 "is_configured": false, 00:13:09.220 "data_offset": 0, 00:13:09.220 "data_size": 63488 00:13:09.220 }, 00:13:09.220 { 00:13:09.220 "name": "BaseBdev2", 00:13:09.220 "uuid": "8c5b3cc7-a0d6-5a4c-adb9-688cb0007c51", 00:13:09.220 "is_configured": true, 00:13:09.220 "data_offset": 2048, 00:13:09.220 "data_size": 63488 00:13:09.220 } 00:13:09.220 ] 00:13:09.220 }' 00:13:09.220 09:57:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:09.220 09:57:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.480 09:57:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:09.480 09:57:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:13:09.480 09:57:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.480 [2024-10-21 09:57:45.900731] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:09.480 [2024-10-21 09:57:45.920006] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca30f0 00:13:09.480 09:57:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.480 09:57:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:09.480 [2024-10-21 09:57:45.922303] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:10.418 09:57:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:10.418 09:57:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:10.418 09:57:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:10.418 09:57:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:10.418 09:57:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:10.418 09:57:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.418 09:57:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:10.418 09:57:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.418 09:57:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.418 09:57:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.418 09:57:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:10.418 "name": "raid_bdev1", 00:13:10.418 "uuid": "8ec6b5e6-f49b-455e-b726-1b0c3770a0a4", 
00:13:10.418 "strip_size_kb": 0, 00:13:10.418 "state": "online", 00:13:10.418 "raid_level": "raid1", 00:13:10.418 "superblock": true, 00:13:10.418 "num_base_bdevs": 2, 00:13:10.418 "num_base_bdevs_discovered": 2, 00:13:10.418 "num_base_bdevs_operational": 2, 00:13:10.418 "process": { 00:13:10.418 "type": "rebuild", 00:13:10.418 "target": "spare", 00:13:10.418 "progress": { 00:13:10.418 "blocks": 20480, 00:13:10.418 "percent": 32 00:13:10.418 } 00:13:10.418 }, 00:13:10.418 "base_bdevs_list": [ 00:13:10.418 { 00:13:10.418 "name": "spare", 00:13:10.418 "uuid": "bec86e08-3511-5590-a61e-6454f1ab284d", 00:13:10.418 "is_configured": true, 00:13:10.418 "data_offset": 2048, 00:13:10.418 "data_size": 63488 00:13:10.418 }, 00:13:10.418 { 00:13:10.418 "name": "BaseBdev2", 00:13:10.418 "uuid": "8c5b3cc7-a0d6-5a4c-adb9-688cb0007c51", 00:13:10.418 "is_configured": true, 00:13:10.418 "data_offset": 2048, 00:13:10.418 "data_size": 63488 00:13:10.418 } 00:13:10.418 ] 00:13:10.418 }' 00:13:10.418 09:57:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:10.677 09:57:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:10.677 09:57:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:10.677 09:57:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:10.677 09:57:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:10.677 09:57:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.677 09:57:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.677 [2024-10-21 09:57:47.101447] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:10.677 [2024-10-21 09:57:47.129149] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev 
raid_bdev1: No such device 00:13:10.677 [2024-10-21 09:57:47.129296] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:10.677 [2024-10-21 09:57:47.129318] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:10.677 [2024-10-21 09:57:47.129333] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:10.677 09:57:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.677 09:57:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:10.677 09:57:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:10.677 09:57:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:10.677 09:57:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:10.677 09:57:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:10.677 09:57:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:10.677 09:57:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:10.677 09:57:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:10.677 09:57:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:10.677 09:57:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:10.677 09:57:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:10.677 09:57:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.677 09:57:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.677 09:57:47 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.677 09:57:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.677 09:57:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:10.677 "name": "raid_bdev1", 00:13:10.677 "uuid": "8ec6b5e6-f49b-455e-b726-1b0c3770a0a4", 00:13:10.677 "strip_size_kb": 0, 00:13:10.677 "state": "online", 00:13:10.677 "raid_level": "raid1", 00:13:10.677 "superblock": true, 00:13:10.677 "num_base_bdevs": 2, 00:13:10.677 "num_base_bdevs_discovered": 1, 00:13:10.677 "num_base_bdevs_operational": 1, 00:13:10.677 "base_bdevs_list": [ 00:13:10.677 { 00:13:10.677 "name": null, 00:13:10.677 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.677 "is_configured": false, 00:13:10.677 "data_offset": 0, 00:13:10.677 "data_size": 63488 00:13:10.677 }, 00:13:10.677 { 00:13:10.677 "name": "BaseBdev2", 00:13:10.677 "uuid": "8c5b3cc7-a0d6-5a4c-adb9-688cb0007c51", 00:13:10.677 "is_configured": true, 00:13:10.677 "data_offset": 2048, 00:13:10.677 "data_size": 63488 00:13:10.677 } 00:13:10.677 ] 00:13:10.677 }' 00:13:10.677 09:57:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:10.677 09:57:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.244 09:57:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:11.244 09:57:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:11.244 09:57:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:11.244 09:57:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:11.244 09:57:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:11.244 09:57:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:13:11.244 09:57:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.244 09:57:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.244 09:57:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.244 09:57:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.244 09:57:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:11.244 "name": "raid_bdev1", 00:13:11.244 "uuid": "8ec6b5e6-f49b-455e-b726-1b0c3770a0a4", 00:13:11.244 "strip_size_kb": 0, 00:13:11.244 "state": "online", 00:13:11.244 "raid_level": "raid1", 00:13:11.244 "superblock": true, 00:13:11.244 "num_base_bdevs": 2, 00:13:11.244 "num_base_bdevs_discovered": 1, 00:13:11.244 "num_base_bdevs_operational": 1, 00:13:11.244 "base_bdevs_list": [ 00:13:11.244 { 00:13:11.244 "name": null, 00:13:11.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:11.244 "is_configured": false, 00:13:11.244 "data_offset": 0, 00:13:11.244 "data_size": 63488 00:13:11.244 }, 00:13:11.244 { 00:13:11.244 "name": "BaseBdev2", 00:13:11.244 "uuid": "8c5b3cc7-a0d6-5a4c-adb9-688cb0007c51", 00:13:11.244 "is_configured": true, 00:13:11.244 "data_offset": 2048, 00:13:11.244 "data_size": 63488 00:13:11.244 } 00:13:11.244 ] 00:13:11.244 }' 00:13:11.244 09:57:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:11.244 09:57:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:11.244 09:57:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:11.244 09:57:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:11.244 09:57:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 
00:13:11.244 09:57:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.244 09:57:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.244 [2024-10-21 09:57:47.733710] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:11.244 [2024-10-21 09:57:47.753404] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca31c0 00:13:11.244 09:57:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.244 09:57:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:11.244 [2024-10-21 09:57:47.755905] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:12.180 09:57:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:12.180 09:57:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:12.180 09:57:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:12.180 09:57:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:12.180 09:57:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:12.180 09:57:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.180 09:57:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.180 09:57:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:12.180 09:57:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:12.439 09:57:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.439 09:57:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:13:12.439 "name": "raid_bdev1", 00:13:12.439 "uuid": "8ec6b5e6-f49b-455e-b726-1b0c3770a0a4", 00:13:12.439 "strip_size_kb": 0, 00:13:12.439 "state": "online", 00:13:12.439 "raid_level": "raid1", 00:13:12.439 "superblock": true, 00:13:12.439 "num_base_bdevs": 2, 00:13:12.439 "num_base_bdevs_discovered": 2, 00:13:12.439 "num_base_bdevs_operational": 2, 00:13:12.439 "process": { 00:13:12.439 "type": "rebuild", 00:13:12.439 "target": "spare", 00:13:12.439 "progress": { 00:13:12.439 "blocks": 20480, 00:13:12.439 "percent": 32 00:13:12.439 } 00:13:12.439 }, 00:13:12.439 "base_bdevs_list": [ 00:13:12.439 { 00:13:12.439 "name": "spare", 00:13:12.439 "uuid": "bec86e08-3511-5590-a61e-6454f1ab284d", 00:13:12.439 "is_configured": true, 00:13:12.439 "data_offset": 2048, 00:13:12.439 "data_size": 63488 00:13:12.439 }, 00:13:12.439 { 00:13:12.439 "name": "BaseBdev2", 00:13:12.439 "uuid": "8c5b3cc7-a0d6-5a4c-adb9-688cb0007c51", 00:13:12.439 "is_configured": true, 00:13:12.439 "data_offset": 2048, 00:13:12.439 "data_size": 63488 00:13:12.439 } 00:13:12.439 ] 00:13:12.439 }' 00:13:12.439 09:57:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:12.439 09:57:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:12.439 09:57:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:12.439 09:57:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:12.439 09:57:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:12.439 09:57:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:12.439 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:12.439 09:57:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:12.439 09:57:48 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:12.439 09:57:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:12.439 09:57:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=395 00:13:12.439 09:57:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:12.439 09:57:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:12.439 09:57:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:12.439 09:57:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:12.439 09:57:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:12.439 09:57:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:12.439 09:57:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:12.439 09:57:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.439 09:57:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.440 09:57:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:12.440 09:57:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.440 09:57:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:12.440 "name": "raid_bdev1", 00:13:12.440 "uuid": "8ec6b5e6-f49b-455e-b726-1b0c3770a0a4", 00:13:12.440 "strip_size_kb": 0, 00:13:12.440 "state": "online", 00:13:12.440 "raid_level": "raid1", 00:13:12.440 "superblock": true, 00:13:12.440 "num_base_bdevs": 2, 00:13:12.440 "num_base_bdevs_discovered": 2, 00:13:12.440 "num_base_bdevs_operational": 2, 00:13:12.440 "process": { 00:13:12.440 
"type": "rebuild", 00:13:12.440 "target": "spare", 00:13:12.440 "progress": { 00:13:12.440 "blocks": 22528, 00:13:12.440 "percent": 35 00:13:12.440 } 00:13:12.440 }, 00:13:12.440 "base_bdevs_list": [ 00:13:12.440 { 00:13:12.440 "name": "spare", 00:13:12.440 "uuid": "bec86e08-3511-5590-a61e-6454f1ab284d", 00:13:12.440 "is_configured": true, 00:13:12.440 "data_offset": 2048, 00:13:12.440 "data_size": 63488 00:13:12.440 }, 00:13:12.440 { 00:13:12.440 "name": "BaseBdev2", 00:13:12.440 "uuid": "8c5b3cc7-a0d6-5a4c-adb9-688cb0007c51", 00:13:12.440 "is_configured": true, 00:13:12.440 "data_offset": 2048, 00:13:12.440 "data_size": 63488 00:13:12.440 } 00:13:12.440 ] 00:13:12.440 }' 00:13:12.440 09:57:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:12.440 09:57:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:12.440 09:57:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:12.699 09:57:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:12.699 09:57:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:13.637 09:57:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:13.637 09:57:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:13.637 09:57:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:13.637 09:57:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:13.637 09:57:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:13.637 09:57:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:13.637 09:57:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:13:13.637 09:57:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.637 09:57:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.637 09:57:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:13.637 09:57:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.637 09:57:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:13.637 "name": "raid_bdev1", 00:13:13.637 "uuid": "8ec6b5e6-f49b-455e-b726-1b0c3770a0a4", 00:13:13.637 "strip_size_kb": 0, 00:13:13.637 "state": "online", 00:13:13.637 "raid_level": "raid1", 00:13:13.637 "superblock": true, 00:13:13.637 "num_base_bdevs": 2, 00:13:13.637 "num_base_bdevs_discovered": 2, 00:13:13.637 "num_base_bdevs_operational": 2, 00:13:13.637 "process": { 00:13:13.637 "type": "rebuild", 00:13:13.637 "target": "spare", 00:13:13.637 "progress": { 00:13:13.637 "blocks": 45056, 00:13:13.637 "percent": 70 00:13:13.637 } 00:13:13.637 }, 00:13:13.637 "base_bdevs_list": [ 00:13:13.637 { 00:13:13.637 "name": "spare", 00:13:13.637 "uuid": "bec86e08-3511-5590-a61e-6454f1ab284d", 00:13:13.637 "is_configured": true, 00:13:13.637 "data_offset": 2048, 00:13:13.637 "data_size": 63488 00:13:13.637 }, 00:13:13.637 { 00:13:13.637 "name": "BaseBdev2", 00:13:13.637 "uuid": "8c5b3cc7-a0d6-5a4c-adb9-688cb0007c51", 00:13:13.637 "is_configured": true, 00:13:13.637 "data_offset": 2048, 00:13:13.637 "data_size": 63488 00:13:13.637 } 00:13:13.637 ] 00:13:13.637 }' 00:13:13.637 09:57:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:13.637 09:57:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:13.637 09:57:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:13.637 
09:57:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:13.637 09:57:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:14.577 [2024-10-21 09:57:50.884556] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:14.577 [2024-10-21 09:57:50.884823] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:14.577 [2024-10-21 09:57:50.885059] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:14.836 09:57:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:14.836 09:57:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:14.836 09:57:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:14.836 09:57:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:14.836 09:57:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:14.836 09:57:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:14.836 09:57:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.836 09:57:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.836 09:57:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:14.836 09:57:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.836 09:57:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.836 09:57:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:14.836 "name": "raid_bdev1", 00:13:14.836 "uuid": "8ec6b5e6-f49b-455e-b726-1b0c3770a0a4", 00:13:14.836 
"strip_size_kb": 0, 00:13:14.836 "state": "online", 00:13:14.836 "raid_level": "raid1", 00:13:14.836 "superblock": true, 00:13:14.836 "num_base_bdevs": 2, 00:13:14.836 "num_base_bdevs_discovered": 2, 00:13:14.836 "num_base_bdevs_operational": 2, 00:13:14.836 "base_bdevs_list": [ 00:13:14.836 { 00:13:14.836 "name": "spare", 00:13:14.836 "uuid": "bec86e08-3511-5590-a61e-6454f1ab284d", 00:13:14.836 "is_configured": true, 00:13:14.836 "data_offset": 2048, 00:13:14.836 "data_size": 63488 00:13:14.836 }, 00:13:14.836 { 00:13:14.836 "name": "BaseBdev2", 00:13:14.836 "uuid": "8c5b3cc7-a0d6-5a4c-adb9-688cb0007c51", 00:13:14.836 "is_configured": true, 00:13:14.836 "data_offset": 2048, 00:13:14.836 "data_size": 63488 00:13:14.836 } 00:13:14.836 ] 00:13:14.836 }' 00:13:14.836 09:57:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:14.836 09:57:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:14.836 09:57:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:14.836 09:57:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:14.836 09:57:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:13:14.836 09:57:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:14.836 09:57:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:14.836 09:57:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:14.836 09:57:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:14.836 09:57:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:14.836 09:57:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.836 09:57:51 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.837 09:57:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.837 09:57:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:14.837 09:57:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.837 09:57:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:14.837 "name": "raid_bdev1", 00:13:14.837 "uuid": "8ec6b5e6-f49b-455e-b726-1b0c3770a0a4", 00:13:14.837 "strip_size_kb": 0, 00:13:14.837 "state": "online", 00:13:14.837 "raid_level": "raid1", 00:13:14.837 "superblock": true, 00:13:14.837 "num_base_bdevs": 2, 00:13:14.837 "num_base_bdevs_discovered": 2, 00:13:14.837 "num_base_bdevs_operational": 2, 00:13:14.837 "base_bdevs_list": [ 00:13:14.837 { 00:13:14.837 "name": "spare", 00:13:14.837 "uuid": "bec86e08-3511-5590-a61e-6454f1ab284d", 00:13:14.837 "is_configured": true, 00:13:14.837 "data_offset": 2048, 00:13:14.837 "data_size": 63488 00:13:14.837 }, 00:13:14.837 { 00:13:14.837 "name": "BaseBdev2", 00:13:14.837 "uuid": "8c5b3cc7-a0d6-5a4c-adb9-688cb0007c51", 00:13:14.837 "is_configured": true, 00:13:14.837 "data_offset": 2048, 00:13:14.837 "data_size": 63488 00:13:14.837 } 00:13:14.837 ] 00:13:14.837 }' 00:13:15.095 09:57:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:15.095 09:57:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:15.095 09:57:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:15.095 09:57:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:15.095 09:57:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:15.095 09:57:51 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:15.095 09:57:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:15.095 09:57:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:15.096 09:57:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:15.096 09:57:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:15.096 09:57:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:15.096 09:57:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:15.096 09:57:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:15.096 09:57:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:15.096 09:57:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:15.096 09:57:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.096 09:57:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.096 09:57:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.096 09:57:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.096 09:57:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:15.096 "name": "raid_bdev1", 00:13:15.096 "uuid": "8ec6b5e6-f49b-455e-b726-1b0c3770a0a4", 00:13:15.096 "strip_size_kb": 0, 00:13:15.096 "state": "online", 00:13:15.096 "raid_level": "raid1", 00:13:15.096 "superblock": true, 00:13:15.096 "num_base_bdevs": 2, 00:13:15.096 "num_base_bdevs_discovered": 2, 00:13:15.096 "num_base_bdevs_operational": 2, 00:13:15.096 "base_bdevs_list": [ 00:13:15.096 { 
00:13:15.096 "name": "spare", 00:13:15.096 "uuid": "bec86e08-3511-5590-a61e-6454f1ab284d", 00:13:15.096 "is_configured": true, 00:13:15.096 "data_offset": 2048, 00:13:15.096 "data_size": 63488 00:13:15.096 }, 00:13:15.096 { 00:13:15.096 "name": "BaseBdev2", 00:13:15.096 "uuid": "8c5b3cc7-a0d6-5a4c-adb9-688cb0007c51", 00:13:15.096 "is_configured": true, 00:13:15.096 "data_offset": 2048, 00:13:15.096 "data_size": 63488 00:13:15.096 } 00:13:15.096 ] 00:13:15.096 }' 00:13:15.096 09:57:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:15.096 09:57:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.356 09:57:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:15.356 09:57:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.356 09:57:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.356 [2024-10-21 09:57:51.947314] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:15.356 [2024-10-21 09:57:51.947452] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:15.356 [2024-10-21 09:57:51.947612] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:15.356 [2024-10-21 09:57:51.947737] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:15.356 [2024-10-21 09:57:51.947801] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000005b80 name raid_bdev1, state offline 00:13:15.615 09:57:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.615 09:57:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:13:15.615 09:57:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.615 09:57:51 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.615 09:57:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.615 09:57:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.615 09:57:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:15.615 09:57:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:15.615 09:57:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:15.615 09:57:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:15.615 09:57:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:15.615 09:57:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:15.615 09:57:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:15.615 09:57:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:15.615 09:57:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:15.615 09:57:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:15.615 09:57:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:15.615 09:57:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:15.615 09:57:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:15.615 /dev/nbd0 00:13:15.875 09:57:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:15.875 09:57:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:15.875 
09:57:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:15.875 09:57:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:13:15.875 09:57:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:15.875 09:57:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:15.875 09:57:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:15.875 09:57:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:13:15.875 09:57:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:15.875 09:57:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:15.875 09:57:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:15.875 1+0 records in 00:13:15.875 1+0 records out 00:13:15.875 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000469623 s, 8.7 MB/s 00:13:15.875 09:57:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:15.875 09:57:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:13:15.875 09:57:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:15.875 09:57:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:15.875 09:57:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:13:15.875 09:57:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:15.875 09:57:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:15.875 09:57:52 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:15.875 /dev/nbd1 00:13:16.135 09:57:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:16.135 09:57:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:16.135 09:57:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:16.135 09:57:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:13:16.135 09:57:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:16.135 09:57:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:16.135 09:57:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:16.135 09:57:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:13:16.135 09:57:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:16.135 09:57:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:16.135 09:57:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:16.135 1+0 records in 00:13:16.135 1+0 records out 00:13:16.135 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000346845 s, 11.8 MB/s 00:13:16.135 09:57:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:16.135 09:57:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:13:16.135 09:57:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:16.135 09:57:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # 
'[' 4096 '!=' 0 ']' 00:13:16.135 09:57:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:13:16.135 09:57:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:16.135 09:57:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:16.135 09:57:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:16.135 09:57:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:16.135 09:57:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:16.135 09:57:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:16.135 09:57:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:16.135 09:57:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:16.135 09:57:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:16.135 09:57:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:16.394 09:57:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:16.394 09:57:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:16.394 09:57:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:16.394 09:57:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:16.394 09:57:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:16.394 09:57:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:16.394 09:57:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:16.394 
09:57:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:16.394 09:57:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:16.394 09:57:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:16.653 09:57:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:16.653 09:57:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:16.653 09:57:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:16.653 09:57:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:16.653 09:57:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:16.653 09:57:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:16.653 09:57:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:16.653 09:57:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:16.653 09:57:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:16.653 09:57:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:16.653 09:57:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.653 09:57:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.653 09:57:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.653 09:57:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:16.653 09:57:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.653 09:57:53 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:16.653 [2024-10-21 09:57:53.205290] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:16.653 [2024-10-21 09:57:53.205382] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:16.653 [2024-10-21 09:57:53.205412] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:13:16.653 [2024-10-21 09:57:53.205422] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:16.653 [2024-10-21 09:57:53.207919] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:16.653 [2024-10-21 09:57:53.208029] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:16.653 [2024-10-21 09:57:53.208164] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:16.653 [2024-10-21 09:57:53.208225] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:16.653 [2024-10-21 09:57:53.208384] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:16.653 spare 00:13:16.653 09:57:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.653 09:57:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:16.653 09:57:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.653 09:57:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.912 [2024-10-21 09:57:53.308308] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000005f00 00:13:16.913 [2024-10-21 09:57:53.308385] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:16.913 [2024-10-21 09:57:53.308826] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1870 00:13:16.913 [2024-10-21 
09:57:53.309071] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000005f00 00:13:16.913 [2024-10-21 09:57:53.309099] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000005f00 00:13:16.913 [2024-10-21 09:57:53.309365] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:16.913 09:57:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.913 09:57:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:16.913 09:57:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:16.913 09:57:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:16.913 09:57:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:16.913 09:57:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:16.913 09:57:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:16.913 09:57:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:16.913 09:57:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:16.913 09:57:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:16.913 09:57:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:16.913 09:57:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.913 09:57:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:16.913 09:57:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.913 09:57:53 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:16.913 09:57:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.913 09:57:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:16.913 "name": "raid_bdev1", 00:13:16.913 "uuid": "8ec6b5e6-f49b-455e-b726-1b0c3770a0a4", 00:13:16.913 "strip_size_kb": 0, 00:13:16.913 "state": "online", 00:13:16.913 "raid_level": "raid1", 00:13:16.913 "superblock": true, 00:13:16.913 "num_base_bdevs": 2, 00:13:16.913 "num_base_bdevs_discovered": 2, 00:13:16.913 "num_base_bdevs_operational": 2, 00:13:16.913 "base_bdevs_list": [ 00:13:16.913 { 00:13:16.913 "name": "spare", 00:13:16.913 "uuid": "bec86e08-3511-5590-a61e-6454f1ab284d", 00:13:16.913 "is_configured": true, 00:13:16.913 "data_offset": 2048, 00:13:16.913 "data_size": 63488 00:13:16.913 }, 00:13:16.913 { 00:13:16.913 "name": "BaseBdev2", 00:13:16.913 "uuid": "8c5b3cc7-a0d6-5a4c-adb9-688cb0007c51", 00:13:16.913 "is_configured": true, 00:13:16.913 "data_offset": 2048, 00:13:16.913 "data_size": 63488 00:13:16.913 } 00:13:16.913 ] 00:13:16.913 }' 00:13:16.913 09:57:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:16.913 09:57:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.489 09:57:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:17.489 09:57:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:17.489 09:57:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:17.489 09:57:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:17.489 09:57:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:17.489 09:57:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.489 
09:57:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:17.489 09:57:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.489 09:57:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.489 09:57:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.489 09:57:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:17.489 "name": "raid_bdev1", 00:13:17.489 "uuid": "8ec6b5e6-f49b-455e-b726-1b0c3770a0a4", 00:13:17.489 "strip_size_kb": 0, 00:13:17.489 "state": "online", 00:13:17.489 "raid_level": "raid1", 00:13:17.489 "superblock": true, 00:13:17.489 "num_base_bdevs": 2, 00:13:17.489 "num_base_bdevs_discovered": 2, 00:13:17.489 "num_base_bdevs_operational": 2, 00:13:17.489 "base_bdevs_list": [ 00:13:17.489 { 00:13:17.489 "name": "spare", 00:13:17.489 "uuid": "bec86e08-3511-5590-a61e-6454f1ab284d", 00:13:17.489 "is_configured": true, 00:13:17.489 "data_offset": 2048, 00:13:17.489 "data_size": 63488 00:13:17.489 }, 00:13:17.489 { 00:13:17.489 "name": "BaseBdev2", 00:13:17.489 "uuid": "8c5b3cc7-a0d6-5a4c-adb9-688cb0007c51", 00:13:17.489 "is_configured": true, 00:13:17.489 "data_offset": 2048, 00:13:17.489 "data_size": 63488 00:13:17.489 } 00:13:17.489 ] 00:13:17.489 }' 00:13:17.489 09:57:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:17.489 09:57:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:17.489 09:57:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:17.489 09:57:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:17.489 09:57:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.489 09:57:53 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:17.489 09:57:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.489 09:57:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.489 09:57:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.489 09:57:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:17.489 09:57:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:17.489 09:57:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.489 09:57:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.489 [2024-10-21 09:57:53.992186] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:17.489 09:57:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.489 09:57:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:17.489 09:57:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:17.489 09:57:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:17.489 09:57:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:17.490 09:57:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:17.490 09:57:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:17.490 09:57:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:17.490 09:57:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:17.490 09:57:53 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:17.490 09:57:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:17.490 09:57:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:17.490 09:57:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.490 09:57:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.490 09:57:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.490 09:57:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.490 09:57:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:17.490 "name": "raid_bdev1", 00:13:17.490 "uuid": "8ec6b5e6-f49b-455e-b726-1b0c3770a0a4", 00:13:17.490 "strip_size_kb": 0, 00:13:17.490 "state": "online", 00:13:17.490 "raid_level": "raid1", 00:13:17.490 "superblock": true, 00:13:17.490 "num_base_bdevs": 2, 00:13:17.490 "num_base_bdevs_discovered": 1, 00:13:17.490 "num_base_bdevs_operational": 1, 00:13:17.490 "base_bdevs_list": [ 00:13:17.490 { 00:13:17.490 "name": null, 00:13:17.490 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.490 "is_configured": false, 00:13:17.490 "data_offset": 0, 00:13:17.490 "data_size": 63488 00:13:17.490 }, 00:13:17.490 { 00:13:17.490 "name": "BaseBdev2", 00:13:17.490 "uuid": "8c5b3cc7-a0d6-5a4c-adb9-688cb0007c51", 00:13:17.490 "is_configured": true, 00:13:17.490 "data_offset": 2048, 00:13:17.490 "data_size": 63488 00:13:17.490 } 00:13:17.490 ] 00:13:17.490 }' 00:13:17.490 09:57:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:17.490 09:57:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.071 09:57:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 
00:13:18.071 09:57:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.071 09:57:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.071 [2024-10-21 09:57:54.443506] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:18.071 [2024-10-21 09:57:54.443930] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:18.071 [2024-10-21 09:57:54.444043] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:18.071 [2024-10-21 09:57:54.444155] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:18.071 [2024-10-21 09:57:54.463080] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1940 00:13:18.071 09:57:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.071 09:57:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:18.071 [2024-10-21 09:57:54.465418] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:19.011 09:57:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:19.011 09:57:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:19.011 09:57:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:19.011 09:57:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:19.011 09:57:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:19.011 09:57:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.011 09:57:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:13:19.011 09:57:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.011 09:57:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.011 09:57:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.011 09:57:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:19.011 "name": "raid_bdev1", 00:13:19.011 "uuid": "8ec6b5e6-f49b-455e-b726-1b0c3770a0a4", 00:13:19.011 "strip_size_kb": 0, 00:13:19.011 "state": "online", 00:13:19.011 "raid_level": "raid1", 00:13:19.011 "superblock": true, 00:13:19.011 "num_base_bdevs": 2, 00:13:19.011 "num_base_bdevs_discovered": 2, 00:13:19.011 "num_base_bdevs_operational": 2, 00:13:19.011 "process": { 00:13:19.011 "type": "rebuild", 00:13:19.011 "target": "spare", 00:13:19.011 "progress": { 00:13:19.011 "blocks": 20480, 00:13:19.011 "percent": 32 00:13:19.011 } 00:13:19.011 }, 00:13:19.011 "base_bdevs_list": [ 00:13:19.011 { 00:13:19.011 "name": "spare", 00:13:19.011 "uuid": "bec86e08-3511-5590-a61e-6454f1ab284d", 00:13:19.011 "is_configured": true, 00:13:19.011 "data_offset": 2048, 00:13:19.011 "data_size": 63488 00:13:19.011 }, 00:13:19.011 { 00:13:19.011 "name": "BaseBdev2", 00:13:19.011 "uuid": "8c5b3cc7-a0d6-5a4c-adb9-688cb0007c51", 00:13:19.011 "is_configured": true, 00:13:19.011 "data_offset": 2048, 00:13:19.011 "data_size": 63488 00:13:19.011 } 00:13:19.011 ] 00:13:19.011 }' 00:13:19.011 09:57:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:19.011 09:57:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:19.011 09:57:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:19.271 09:57:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:19.271 09:57:55 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:19.271 09:57:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.271 09:57:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.271 [2024-10-21 09:57:55.629025] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:19.271 [2024-10-21 09:57:55.672858] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:19.271 [2024-10-21 09:57:55.673083] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:19.271 [2024-10-21 09:57:55.673123] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:19.271 [2024-10-21 09:57:55.673149] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:19.271 09:57:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.271 09:57:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:19.271 09:57:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:19.271 09:57:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:19.271 09:57:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:19.271 09:57:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:19.271 09:57:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:19.271 09:57:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:19.271 09:57:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:19.271 09:57:55 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:19.271 09:57:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:19.271 09:57:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.271 09:57:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.271 09:57:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:19.271 09:57:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.271 09:57:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.271 09:57:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:19.271 "name": "raid_bdev1", 00:13:19.271 "uuid": "8ec6b5e6-f49b-455e-b726-1b0c3770a0a4", 00:13:19.271 "strip_size_kb": 0, 00:13:19.271 "state": "online", 00:13:19.271 "raid_level": "raid1", 00:13:19.271 "superblock": true, 00:13:19.271 "num_base_bdevs": 2, 00:13:19.271 "num_base_bdevs_discovered": 1, 00:13:19.271 "num_base_bdevs_operational": 1, 00:13:19.271 "base_bdevs_list": [ 00:13:19.271 { 00:13:19.271 "name": null, 00:13:19.271 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:19.271 "is_configured": false, 00:13:19.271 "data_offset": 0, 00:13:19.271 "data_size": 63488 00:13:19.271 }, 00:13:19.271 { 00:13:19.271 "name": "BaseBdev2", 00:13:19.271 "uuid": "8c5b3cc7-a0d6-5a4c-adb9-688cb0007c51", 00:13:19.271 "is_configured": true, 00:13:19.271 "data_offset": 2048, 00:13:19.271 "data_size": 63488 00:13:19.271 } 00:13:19.272 ] 00:13:19.272 }' 00:13:19.272 09:57:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:19.272 09:57:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.840 09:57:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 
00:13:19.840 09:57:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.840 09:57:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.840 [2024-10-21 09:57:56.156548] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:19.840 [2024-10-21 09:57:56.156645] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:19.840 [2024-10-21 09:57:56.156671] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:19.840 [2024-10-21 09:57:56.156684] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:19.840 [2024-10-21 09:57:56.157248] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:19.840 [2024-10-21 09:57:56.157293] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:19.840 [2024-10-21 09:57:56.157407] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:19.840 [2024-10-21 09:57:56.157427] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:19.840 [2024-10-21 09:57:56.157438] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:19.840 [2024-10-21 09:57:56.157471] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:19.840 [2024-10-21 09:57:56.176958] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1a10 00:13:19.840 spare 00:13:19.840 09:57:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.840 09:57:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:19.840 [2024-10-21 09:57:56.179141] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:20.778 09:57:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:20.778 09:57:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:20.778 09:57:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:20.778 09:57:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:20.778 09:57:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:20.778 09:57:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.778 09:57:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.778 09:57:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:20.778 09:57:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.778 09:57:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.778 09:57:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:20.778 "name": "raid_bdev1", 00:13:20.778 "uuid": "8ec6b5e6-f49b-455e-b726-1b0c3770a0a4", 00:13:20.778 "strip_size_kb": 0, 00:13:20.778 "state": "online", 00:13:20.778 
"raid_level": "raid1", 00:13:20.778 "superblock": true, 00:13:20.778 "num_base_bdevs": 2, 00:13:20.778 "num_base_bdevs_discovered": 2, 00:13:20.778 "num_base_bdevs_operational": 2, 00:13:20.778 "process": { 00:13:20.778 "type": "rebuild", 00:13:20.778 "target": "spare", 00:13:20.778 "progress": { 00:13:20.778 "blocks": 20480, 00:13:20.778 "percent": 32 00:13:20.778 } 00:13:20.778 }, 00:13:20.778 "base_bdevs_list": [ 00:13:20.778 { 00:13:20.778 "name": "spare", 00:13:20.778 "uuid": "bec86e08-3511-5590-a61e-6454f1ab284d", 00:13:20.778 "is_configured": true, 00:13:20.778 "data_offset": 2048, 00:13:20.778 "data_size": 63488 00:13:20.778 }, 00:13:20.778 { 00:13:20.778 "name": "BaseBdev2", 00:13:20.778 "uuid": "8c5b3cc7-a0d6-5a4c-adb9-688cb0007c51", 00:13:20.778 "is_configured": true, 00:13:20.778 "data_offset": 2048, 00:13:20.778 "data_size": 63488 00:13:20.778 } 00:13:20.778 ] 00:13:20.778 }' 00:13:20.778 09:57:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:20.778 09:57:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:20.778 09:57:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:20.778 09:57:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:20.778 09:57:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:20.778 09:57:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.778 09:57:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.778 [2024-10-21 09:57:57.339015] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:21.039 [2024-10-21 09:57:57.385419] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:21.039 [2024-10-21 09:57:57.385512] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:21.039 [2024-10-21 09:57:57.385532] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:21.039 [2024-10-21 09:57:57.385539] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:21.039 09:57:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.039 09:57:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:21.039 09:57:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:21.039 09:57:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:21.039 09:57:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:21.039 09:57:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:21.039 09:57:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:21.039 09:57:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:21.039 09:57:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:21.039 09:57:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:21.039 09:57:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:21.039 09:57:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.039 09:57:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:21.039 09:57:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.039 09:57:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.039 09:57:57 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.039 09:57:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:21.039 "name": "raid_bdev1", 00:13:21.039 "uuid": "8ec6b5e6-f49b-455e-b726-1b0c3770a0a4", 00:13:21.039 "strip_size_kb": 0, 00:13:21.039 "state": "online", 00:13:21.039 "raid_level": "raid1", 00:13:21.039 "superblock": true, 00:13:21.039 "num_base_bdevs": 2, 00:13:21.039 "num_base_bdevs_discovered": 1, 00:13:21.039 "num_base_bdevs_operational": 1, 00:13:21.039 "base_bdevs_list": [ 00:13:21.039 { 00:13:21.039 "name": null, 00:13:21.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.039 "is_configured": false, 00:13:21.039 "data_offset": 0, 00:13:21.039 "data_size": 63488 00:13:21.039 }, 00:13:21.039 { 00:13:21.039 "name": "BaseBdev2", 00:13:21.039 "uuid": "8c5b3cc7-a0d6-5a4c-adb9-688cb0007c51", 00:13:21.039 "is_configured": true, 00:13:21.039 "data_offset": 2048, 00:13:21.039 "data_size": 63488 00:13:21.039 } 00:13:21.039 ] 00:13:21.039 }' 00:13:21.039 09:57:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:21.039 09:57:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.299 09:57:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:21.299 09:57:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:21.299 09:57:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:21.299 09:57:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:21.299 09:57:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:21.299 09:57:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.299 09:57:57 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.299 09:57:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.299 09:57:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:21.299 09:57:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.559 09:57:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:21.559 "name": "raid_bdev1", 00:13:21.559 "uuid": "8ec6b5e6-f49b-455e-b726-1b0c3770a0a4", 00:13:21.559 "strip_size_kb": 0, 00:13:21.559 "state": "online", 00:13:21.559 "raid_level": "raid1", 00:13:21.559 "superblock": true, 00:13:21.559 "num_base_bdevs": 2, 00:13:21.559 "num_base_bdevs_discovered": 1, 00:13:21.559 "num_base_bdevs_operational": 1, 00:13:21.559 "base_bdevs_list": [ 00:13:21.559 { 00:13:21.559 "name": null, 00:13:21.559 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.559 "is_configured": false, 00:13:21.559 "data_offset": 0, 00:13:21.559 "data_size": 63488 00:13:21.559 }, 00:13:21.559 { 00:13:21.559 "name": "BaseBdev2", 00:13:21.559 "uuid": "8c5b3cc7-a0d6-5a4c-adb9-688cb0007c51", 00:13:21.559 "is_configured": true, 00:13:21.559 "data_offset": 2048, 00:13:21.559 "data_size": 63488 00:13:21.559 } 00:13:21.559 ] 00:13:21.559 }' 00:13:21.559 09:57:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:21.559 09:57:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:21.559 09:57:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:21.559 09:57:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:21.559 09:57:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:21.559 09:57:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:13:21.559 09:57:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.559 09:57:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.559 09:57:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:21.559 09:57:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.559 09:57:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.559 [2024-10-21 09:57:58.017387] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:21.559 [2024-10-21 09:57:58.017523] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:21.559 [2024-10-21 09:57:58.017556] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:13:21.559 [2024-10-21 09:57:58.017580] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:21.559 [2024-10-21 09:57:58.018102] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:21.559 [2024-10-21 09:57:58.018123] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:21.559 [2024-10-21 09:57:58.018234] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:21.559 [2024-10-21 09:57:58.018248] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:21.559 [2024-10-21 09:57:58.018263] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:21.559 [2024-10-21 09:57:58.018277] bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:21.559 BaseBdev1 00:13:21.559 09:57:58 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.559 09:57:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:22.530 09:57:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:22.530 09:57:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:22.530 09:57:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:22.530 09:57:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:22.530 09:57:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:22.530 09:57:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:22.530 09:57:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:22.530 09:57:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:22.530 09:57:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:22.530 09:57:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:22.530 09:57:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.530 09:57:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:22.530 09:57:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.530 09:57:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.530 09:57:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.530 09:57:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:22.530 "name": "raid_bdev1", 00:13:22.530 "uuid": "8ec6b5e6-f49b-455e-b726-1b0c3770a0a4", 00:13:22.530 
"strip_size_kb": 0, 00:13:22.530 "state": "online", 00:13:22.530 "raid_level": "raid1", 00:13:22.530 "superblock": true, 00:13:22.530 "num_base_bdevs": 2, 00:13:22.530 "num_base_bdevs_discovered": 1, 00:13:22.530 "num_base_bdevs_operational": 1, 00:13:22.530 "base_bdevs_list": [ 00:13:22.530 { 00:13:22.530 "name": null, 00:13:22.530 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.530 "is_configured": false, 00:13:22.530 "data_offset": 0, 00:13:22.530 "data_size": 63488 00:13:22.530 }, 00:13:22.530 { 00:13:22.530 "name": "BaseBdev2", 00:13:22.530 "uuid": "8c5b3cc7-a0d6-5a4c-adb9-688cb0007c51", 00:13:22.530 "is_configured": true, 00:13:22.530 "data_offset": 2048, 00:13:22.530 "data_size": 63488 00:13:22.530 } 00:13:22.530 ] 00:13:22.530 }' 00:13:22.530 09:57:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:22.530 09:57:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.100 09:57:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:23.100 09:57:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:23.100 09:57:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:23.100 09:57:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:23.100 09:57:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:23.100 09:57:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.100 09:57:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.100 09:57:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:23.100 09:57:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.100 09:57:59 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.100 09:57:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:23.100 "name": "raid_bdev1", 00:13:23.100 "uuid": "8ec6b5e6-f49b-455e-b726-1b0c3770a0a4", 00:13:23.100 "strip_size_kb": 0, 00:13:23.100 "state": "online", 00:13:23.100 "raid_level": "raid1", 00:13:23.100 "superblock": true, 00:13:23.100 "num_base_bdevs": 2, 00:13:23.100 "num_base_bdevs_discovered": 1, 00:13:23.100 "num_base_bdevs_operational": 1, 00:13:23.100 "base_bdevs_list": [ 00:13:23.100 { 00:13:23.100 "name": null, 00:13:23.100 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.100 "is_configured": false, 00:13:23.100 "data_offset": 0, 00:13:23.100 "data_size": 63488 00:13:23.100 }, 00:13:23.100 { 00:13:23.100 "name": "BaseBdev2", 00:13:23.100 "uuid": "8c5b3cc7-a0d6-5a4c-adb9-688cb0007c51", 00:13:23.100 "is_configured": true, 00:13:23.100 "data_offset": 2048, 00:13:23.100 "data_size": 63488 00:13:23.100 } 00:13:23.100 ] 00:13:23.100 }' 00:13:23.100 09:57:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:23.100 09:57:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:23.100 09:57:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:23.100 09:57:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:23.101 09:57:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:23.101 09:57:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:13:23.101 09:57:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:23.101 09:57:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local 
arg=rpc_cmd 00:13:23.101 09:57:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:23.101 09:57:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:23.101 09:57:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:23.101 09:57:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:23.101 09:57:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.101 09:57:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.101 [2024-10-21 09:57:59.630733] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:23.101 [2024-10-21 09:57:59.631007] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:23.101 [2024-10-21 09:57:59.631029] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:23.101 request: 00:13:23.101 { 00:13:23.101 "base_bdev": "BaseBdev1", 00:13:23.101 "raid_bdev": "raid_bdev1", 00:13:23.101 "method": "bdev_raid_add_base_bdev", 00:13:23.101 "req_id": 1 00:13:23.101 } 00:13:23.101 Got JSON-RPC error response 00:13:23.101 response: 00:13:23.101 { 00:13:23.101 "code": -22, 00:13:23.101 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:23.101 } 00:13:23.101 09:57:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:23.101 09:57:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:13:23.101 09:57:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:23.101 09:57:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:23.101 09:57:59 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:23.101 09:57:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:24.482 09:58:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:24.482 09:58:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:24.482 09:58:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:24.482 09:58:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:24.482 09:58:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:24.482 09:58:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:24.482 09:58:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:24.482 09:58:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:24.482 09:58:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:24.482 09:58:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:24.482 09:58:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.482 09:58:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:24.482 09:58:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.482 09:58:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.482 09:58:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.482 09:58:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:24.482 "name": "raid_bdev1", 00:13:24.482 "uuid": 
"8ec6b5e6-f49b-455e-b726-1b0c3770a0a4", 00:13:24.482 "strip_size_kb": 0, 00:13:24.482 "state": "online", 00:13:24.482 "raid_level": "raid1", 00:13:24.482 "superblock": true, 00:13:24.482 "num_base_bdevs": 2, 00:13:24.482 "num_base_bdevs_discovered": 1, 00:13:24.482 "num_base_bdevs_operational": 1, 00:13:24.482 "base_bdevs_list": [ 00:13:24.482 { 00:13:24.482 "name": null, 00:13:24.482 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.482 "is_configured": false, 00:13:24.482 "data_offset": 0, 00:13:24.482 "data_size": 63488 00:13:24.482 }, 00:13:24.482 { 00:13:24.482 "name": "BaseBdev2", 00:13:24.482 "uuid": "8c5b3cc7-a0d6-5a4c-adb9-688cb0007c51", 00:13:24.482 "is_configured": true, 00:13:24.482 "data_offset": 2048, 00:13:24.482 "data_size": 63488 00:13:24.482 } 00:13:24.482 ] 00:13:24.482 }' 00:13:24.482 09:58:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:24.482 09:58:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.742 09:58:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:24.742 09:58:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:24.742 09:58:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:24.742 09:58:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:24.742 09:58:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:24.742 09:58:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.742 09:58:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.742 09:58:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:24.742 09:58:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:13:24.742 09:58:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.742 09:58:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:24.742 "name": "raid_bdev1", 00:13:24.742 "uuid": "8ec6b5e6-f49b-455e-b726-1b0c3770a0a4", 00:13:24.742 "strip_size_kb": 0, 00:13:24.742 "state": "online", 00:13:24.742 "raid_level": "raid1", 00:13:24.742 "superblock": true, 00:13:24.742 "num_base_bdevs": 2, 00:13:24.742 "num_base_bdevs_discovered": 1, 00:13:24.742 "num_base_bdevs_operational": 1, 00:13:24.742 "base_bdevs_list": [ 00:13:24.742 { 00:13:24.742 "name": null, 00:13:24.742 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.742 "is_configured": false, 00:13:24.742 "data_offset": 0, 00:13:24.742 "data_size": 63488 00:13:24.742 }, 00:13:24.742 { 00:13:24.742 "name": "BaseBdev2", 00:13:24.742 "uuid": "8c5b3cc7-a0d6-5a4c-adb9-688cb0007c51", 00:13:24.742 "is_configured": true, 00:13:24.742 "data_offset": 2048, 00:13:24.742 "data_size": 63488 00:13:24.742 } 00:13:24.742 ] 00:13:24.742 }' 00:13:24.742 09:58:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:24.742 09:58:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:24.742 09:58:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:24.742 09:58:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:24.742 09:58:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 75354 00:13:24.742 09:58:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 75354 ']' 00:13:24.742 09:58:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 75354 00:13:24.742 09:58:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:13:24.742 09:58:01 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:24.742 09:58:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75354 00:13:24.742 killing process with pid 75354 00:13:24.742 Received shutdown signal, test time was about 60.000000 seconds 00:13:24.742 00:13:24.742 Latency(us) 00:13:24.742 [2024-10-21T09:58:01.337Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:24.742 [2024-10-21T09:58:01.337Z] =================================================================================================================== 00:13:24.742 [2024-10-21T09:58:01.337Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:24.742 09:58:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:24.742 09:58:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:24.742 09:58:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75354' 00:13:24.742 09:58:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 75354 00:13:24.742 [2024-10-21 09:58:01.266896] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:24.742 [2024-10-21 09:58:01.267029] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:24.742 09:58:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 75354 00:13:24.742 [2024-10-21 09:58:01.267081] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:24.742 [2024-10-21 09:58:01.267092] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000005f00 name raid_bdev1, state offline 00:13:25.310 [2024-10-21 09:58:01.610155] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:26.249 09:58:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 
00:13:26.249 ************************************ 00:13:26.249 END TEST raid_rebuild_test_sb 00:13:26.249 ************************************ 00:13:26.249 00:13:26.249 real 0m24.373s 00:13:26.249 user 0m29.075s 00:13:26.249 sys 0m4.171s 00:13:26.249 09:58:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:26.249 09:58:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.507 09:58:02 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:13:26.507 09:58:02 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:13:26.507 09:58:02 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:26.507 09:58:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:26.507 ************************************ 00:13:26.507 START TEST raid_rebuild_test_io 00:13:26.507 ************************************ 00:13:26.507 09:58:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 false true true 00:13:26.507 09:58:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:26.507 09:58:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:13:26.507 09:58:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:26.507 09:58:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:13:26.507 09:58:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:26.507 09:58:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:26.507 09:58:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:26.507 09:58:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:26.507 09:58:02 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:26.507 09:58:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:26.507 09:58:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:26.507 09:58:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:26.507 09:58:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:26.507 09:58:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:26.507 09:58:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:26.507 09:58:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:26.507 09:58:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:26.507 09:58:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:26.507 09:58:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:26.508 09:58:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:26.508 09:58:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:26.508 09:58:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:26.508 09:58:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:26.508 09:58:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76096 00:13:26.508 09:58:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:26.508 09:58:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76096 00:13:26.508 09:58:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@831 -- # '[' -z 
76096 ']' 00:13:26.508 09:58:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:26.508 09:58:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:26.508 09:58:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:26.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:26.508 09:58:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:26.508 09:58:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:26.508 [2024-10-21 09:58:03.015030] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:13:26.508 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:26.508 Zero copy mechanism will not be used. 00:13:26.508 [2024-10-21 09:58:03.015260] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76096 ] 00:13:26.770 [2024-10-21 09:58:03.183600] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:26.770 [2024-10-21 09:58:03.329372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:27.030 [2024-10-21 09:58:03.583221] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:27.030 [2024-10-21 09:58:03.583329] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:27.290 09:58:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:27.290 09:58:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # return 0 00:13:27.290 09:58:03 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:27.290 09:58:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:27.290 09:58:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.290 09:58:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:27.550 BaseBdev1_malloc 00:13:27.550 09:58:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.550 09:58:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:27.550 09:58:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.550 09:58:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:27.550 [2024-10-21 09:58:03.897831] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:27.550 [2024-10-21 09:58:03.898188] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:27.550 [2024-10-21 09:58:03.898276] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:13:27.550 [2024-10-21 09:58:03.898364] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:27.551 [2024-10-21 09:58:03.900950] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:27.551 [2024-10-21 09:58:03.901062] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:27.551 BaseBdev1 00:13:27.551 09:58:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.551 09:58:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:27.551 09:58:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 
00:13:27.551 09:58:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.551 09:58:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:27.551 BaseBdev2_malloc 00:13:27.551 09:58:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.551 09:58:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:27.551 09:58:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.551 09:58:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:27.551 [2024-10-21 09:58:03.961876] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:27.551 [2024-10-21 09:58:03.962198] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:27.551 [2024-10-21 09:58:03.962272] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:13:27.551 [2024-10-21 09:58:03.962326] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:27.551 [2024-10-21 09:58:03.964747] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:27.551 [2024-10-21 09:58:03.964880] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:27.551 BaseBdev2 00:13:27.551 09:58:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.551 09:58:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:27.551 09:58:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.551 09:58:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:27.551 spare_malloc 00:13:27.551 09:58:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:13:27.551 09:58:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:27.551 09:58:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.551 09:58:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:27.551 spare_delay 00:13:27.551 09:58:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.551 09:58:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:27.551 09:58:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.551 09:58:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:27.551 [2024-10-21 09:58:04.054765] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:27.551 [2024-10-21 09:58:04.055231] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:27.551 [2024-10-21 09:58:04.055262] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:13:27.551 [2024-10-21 09:58:04.055274] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:27.551 [2024-10-21 09:58:04.057817] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:27.551 [2024-10-21 09:58:04.057973] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:27.551 spare 00:13:27.551 09:58:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.551 09:58:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:13:27.551 09:58:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.551 
09:58:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:27.551 [2024-10-21 09:58:04.066870] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:27.551 [2024-10-21 09:58:04.068971] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:27.551 [2024-10-21 09:58:04.069065] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000005b80 00:13:27.551 [2024-10-21 09:58:04.069077] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:27.551 [2024-10-21 09:58:04.069333] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:13:27.551 [2024-10-21 09:58:04.069498] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000005b80 00:13:27.551 [2024-10-21 09:58:04.069507] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000005b80 00:13:27.551 [2024-10-21 09:58:04.069689] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:27.551 09:58:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.551 09:58:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:27.551 09:58:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:27.551 09:58:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:27.551 09:58:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:27.551 09:58:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:27.551 09:58:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:27.551 09:58:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:13:27.551 09:58:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:27.551 09:58:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:27.551 09:58:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:27.551 09:58:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.551 09:58:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:27.551 09:58:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.551 09:58:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:27.551 09:58:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.551 09:58:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:27.551 "name": "raid_bdev1", 00:13:27.551 "uuid": "e4e7e4d3-4934-42e6-bf63-63bba38c4012", 00:13:27.551 "strip_size_kb": 0, 00:13:27.551 "state": "online", 00:13:27.551 "raid_level": "raid1", 00:13:27.551 "superblock": false, 00:13:27.551 "num_base_bdevs": 2, 00:13:27.551 "num_base_bdevs_discovered": 2, 00:13:27.551 "num_base_bdevs_operational": 2, 00:13:27.551 "base_bdevs_list": [ 00:13:27.551 { 00:13:27.551 "name": "BaseBdev1", 00:13:27.551 "uuid": "825ae93d-9499-5f51-85f0-697126942b43", 00:13:27.551 "is_configured": true, 00:13:27.551 "data_offset": 0, 00:13:27.551 "data_size": 65536 00:13:27.551 }, 00:13:27.551 { 00:13:27.551 "name": "BaseBdev2", 00:13:27.551 "uuid": "3f87957c-ab10-548c-82c4-5696da502fa9", 00:13:27.551 "is_configured": true, 00:13:27.551 "data_offset": 0, 00:13:27.551 "data_size": 65536 00:13:27.551 } 00:13:27.551 ] 00:13:27.551 }' 00:13:27.551 09:58:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:27.551 09:58:04 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@10 -- # set +x 00:13:28.121 09:58:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:28.121 09:58:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:28.121 09:58:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.121 09:58:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:28.121 [2024-10-21 09:58:04.502479] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:28.121 09:58:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.121 09:58:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:13:28.121 09:58:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:28.121 09:58:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.121 09:58:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.121 09:58:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:28.121 09:58:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.121 09:58:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:28.121 09:58:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:28.121 09:58:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:28.121 09:58:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:28.121 09:58:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.121 09:58:04 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@10 -- # set +x 00:13:28.121 [2024-10-21 09:58:04.558052] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:28.121 09:58:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.121 09:58:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:28.121 09:58:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:28.121 09:58:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:28.121 09:58:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:28.121 09:58:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:28.121 09:58:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:28.121 09:58:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:28.121 09:58:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:28.121 09:58:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:28.121 09:58:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:28.121 09:58:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:28.121 09:58:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.121 09:58:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.121 09:58:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:28.121 09:58:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.121 09:58:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:13:28.121 "name": "raid_bdev1", 00:13:28.121 "uuid": "e4e7e4d3-4934-42e6-bf63-63bba38c4012", 00:13:28.121 "strip_size_kb": 0, 00:13:28.121 "state": "online", 00:13:28.121 "raid_level": "raid1", 00:13:28.121 "superblock": false, 00:13:28.121 "num_base_bdevs": 2, 00:13:28.121 "num_base_bdevs_discovered": 1, 00:13:28.121 "num_base_bdevs_operational": 1, 00:13:28.121 "base_bdevs_list": [ 00:13:28.121 { 00:13:28.121 "name": null, 00:13:28.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.121 "is_configured": false, 00:13:28.121 "data_offset": 0, 00:13:28.121 "data_size": 65536 00:13:28.121 }, 00:13:28.121 { 00:13:28.121 "name": "BaseBdev2", 00:13:28.121 "uuid": "3f87957c-ab10-548c-82c4-5696da502fa9", 00:13:28.121 "is_configured": true, 00:13:28.121 "data_offset": 0, 00:13:28.121 "data_size": 65536 00:13:28.121 } 00:13:28.121 ] 00:13:28.121 }' 00:13:28.121 09:58:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:28.121 09:58:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:28.121 [2024-10-21 09:58:04.643580] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:28.121 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:28.121 Zero copy mechanism will not be used. 00:13:28.121 Running I/O for 60 seconds... 
00:13:28.381 09:58:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:28.381 09:58:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.381 09:58:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:28.381 [2024-10-21 09:58:04.962100] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:28.641 09:58:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.641 09:58:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:28.641 [2024-10-21 09:58:05.029588] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:28.641 [2024-10-21 09:58:05.031826] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:28.641 [2024-10-21 09:58:05.140638] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:28.641 [2024-10-21 09:58:05.141604] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:28.901 [2024-10-21 09:58:05.351833] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:28.901 [2024-10-21 09:58:05.352486] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:29.160 151.00 IOPS, 453.00 MiB/s [2024-10-21T09:58:05.755Z] [2024-10-21 09:58:05.687209] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:29.160 [2024-10-21 09:58:05.688192] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:29.420 [2024-10-21 09:58:05.926066] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:29.420 09:58:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:29.420 09:58:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:29.420 09:58:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:29.420 09:58:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:29.680 09:58:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:29.680 09:58:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.680 09:58:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.680 09:58:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:29.680 09:58:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:29.680 09:58:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.680 09:58:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:29.680 "name": "raid_bdev1", 00:13:29.680 "uuid": "e4e7e4d3-4934-42e6-bf63-63bba38c4012", 00:13:29.680 "strip_size_kb": 0, 00:13:29.680 "state": "online", 00:13:29.680 "raid_level": "raid1", 00:13:29.680 "superblock": false, 00:13:29.680 "num_base_bdevs": 2, 00:13:29.680 "num_base_bdevs_discovered": 2, 00:13:29.680 "num_base_bdevs_operational": 2, 00:13:29.680 "process": { 00:13:29.680 "type": "rebuild", 00:13:29.680 "target": "spare", 00:13:29.680 "progress": { 00:13:29.680 "blocks": 10240, 00:13:29.680 "percent": 15 00:13:29.680 } 00:13:29.680 }, 00:13:29.680 "base_bdevs_list": [ 00:13:29.680 { 00:13:29.680 "name": "spare", 00:13:29.680 "uuid": 
"a2a7c3ec-dea2-57bd-b9a8-1e6a6e8dc8d6", 00:13:29.680 "is_configured": true, 00:13:29.680 "data_offset": 0, 00:13:29.680 "data_size": 65536 00:13:29.680 }, 00:13:29.680 { 00:13:29.680 "name": "BaseBdev2", 00:13:29.680 "uuid": "3f87957c-ab10-548c-82c4-5696da502fa9", 00:13:29.680 "is_configured": true, 00:13:29.680 "data_offset": 0, 00:13:29.680 "data_size": 65536 00:13:29.680 } 00:13:29.680 ] 00:13:29.680 }' 00:13:29.680 09:58:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:29.680 09:58:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:29.680 09:58:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:29.680 09:58:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:29.680 09:58:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:29.680 09:58:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.680 09:58:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:29.680 [2024-10-21 09:58:06.139500] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:29.680 [2024-10-21 09:58:06.160216] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:29.680 [2024-10-21 09:58:06.265431] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:29.941 [2024-10-21 09:58:06.281290] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:29.941 [2024-10-21 09:58:06.281416] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:29.941 [2024-10-21 09:58:06.281452] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:29.941 
[2024-10-21 09:58:06.326093] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005e10 00:13:29.941 09:58:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.941 09:58:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:29.941 09:58:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:29.941 09:58:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:29.941 09:58:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:29.941 09:58:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:29.941 09:58:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:29.941 09:58:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:29.941 09:58:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:29.941 09:58:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:29.941 09:58:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:29.941 09:58:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.941 09:58:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:29.941 09:58:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.941 09:58:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:29.941 09:58:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.941 09:58:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:29.941 "name": 
"raid_bdev1", 00:13:29.941 "uuid": "e4e7e4d3-4934-42e6-bf63-63bba38c4012", 00:13:29.941 "strip_size_kb": 0, 00:13:29.941 "state": "online", 00:13:29.941 "raid_level": "raid1", 00:13:29.941 "superblock": false, 00:13:29.941 "num_base_bdevs": 2, 00:13:29.941 "num_base_bdevs_discovered": 1, 00:13:29.941 "num_base_bdevs_operational": 1, 00:13:29.941 "base_bdevs_list": [ 00:13:29.941 { 00:13:29.941 "name": null, 00:13:29.941 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:29.941 "is_configured": false, 00:13:29.941 "data_offset": 0, 00:13:29.941 "data_size": 65536 00:13:29.941 }, 00:13:29.941 { 00:13:29.941 "name": "BaseBdev2", 00:13:29.941 "uuid": "3f87957c-ab10-548c-82c4-5696da502fa9", 00:13:29.941 "is_configured": true, 00:13:29.941 "data_offset": 0, 00:13:29.941 "data_size": 65536 00:13:29.941 } 00:13:29.941 ] 00:13:29.941 }' 00:13:29.941 09:58:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:29.941 09:58:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:30.201 142.50 IOPS, 427.50 MiB/s [2024-10-21T09:58:06.796Z] 09:58:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:30.201 09:58:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:30.201 09:58:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:30.201 09:58:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:30.201 09:58:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:30.201 09:58:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.201 09:58:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:30.201 09:58:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 
00:13:30.201 09:58:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:30.461 09:58:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.461 09:58:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:30.461 "name": "raid_bdev1", 00:13:30.461 "uuid": "e4e7e4d3-4934-42e6-bf63-63bba38c4012", 00:13:30.461 "strip_size_kb": 0, 00:13:30.461 "state": "online", 00:13:30.461 "raid_level": "raid1", 00:13:30.461 "superblock": false, 00:13:30.461 "num_base_bdevs": 2, 00:13:30.461 "num_base_bdevs_discovered": 1, 00:13:30.461 "num_base_bdevs_operational": 1, 00:13:30.461 "base_bdevs_list": [ 00:13:30.461 { 00:13:30.461 "name": null, 00:13:30.461 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:30.461 "is_configured": false, 00:13:30.461 "data_offset": 0, 00:13:30.461 "data_size": 65536 00:13:30.461 }, 00:13:30.461 { 00:13:30.461 "name": "BaseBdev2", 00:13:30.461 "uuid": "3f87957c-ab10-548c-82c4-5696da502fa9", 00:13:30.461 "is_configured": true, 00:13:30.461 "data_offset": 0, 00:13:30.461 "data_size": 65536 00:13:30.461 } 00:13:30.461 ] 00:13:30.461 }' 00:13:30.461 09:58:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:30.461 09:58:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:30.461 09:58:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:30.461 09:58:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:30.461 09:58:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:30.461 09:58:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.461 09:58:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:30.461 [2024-10-21 09:58:06.916874] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:30.461 09:58:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.461 09:58:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:30.461 [2024-10-21 09:58:06.980113] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:30.461 [2024-10-21 09:58:06.982396] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:30.722 [2024-10-21 09:58:07.091513] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:30.722 [2024-10-21 09:58:07.092435] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:30.722 [2024-10-21 09:58:07.220682] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:30.722 [2024-10-21 09:58:07.221205] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:30.982 [2024-10-21 09:58:07.553436] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:31.241 162.00 IOPS, 486.00 MiB/s [2024-10-21T09:58:07.836Z] [2024-10-21 09:58:07.688092] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:31.501 [2024-10-21 09:58:07.919681] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:31.501 [2024-10-21 09:58:07.920249] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:31.501 09:58:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 
rebuild spare 00:13:31.501 09:58:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:31.501 09:58:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:31.501 09:58:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:31.501 09:58:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:31.501 09:58:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.501 09:58:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.501 09:58:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:31.501 09:58:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:31.501 09:58:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.501 09:58:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:31.501 "name": "raid_bdev1", 00:13:31.501 "uuid": "e4e7e4d3-4934-42e6-bf63-63bba38c4012", 00:13:31.501 "strip_size_kb": 0, 00:13:31.501 "state": "online", 00:13:31.501 "raid_level": "raid1", 00:13:31.501 "superblock": false, 00:13:31.501 "num_base_bdevs": 2, 00:13:31.501 "num_base_bdevs_discovered": 2, 00:13:31.501 "num_base_bdevs_operational": 2, 00:13:31.501 "process": { 00:13:31.501 "type": "rebuild", 00:13:31.501 "target": "spare", 00:13:31.501 "progress": { 00:13:31.501 "blocks": 14336, 00:13:31.501 "percent": 21 00:13:31.501 } 00:13:31.501 }, 00:13:31.501 "base_bdevs_list": [ 00:13:31.501 { 00:13:31.501 "name": "spare", 00:13:31.501 "uuid": "a2a7c3ec-dea2-57bd-b9a8-1e6a6e8dc8d6", 00:13:31.501 "is_configured": true, 00:13:31.501 "data_offset": 0, 00:13:31.501 "data_size": 65536 00:13:31.501 }, 00:13:31.501 { 00:13:31.501 "name": "BaseBdev2", 00:13:31.501 "uuid": 
"3f87957c-ab10-548c-82c4-5696da502fa9", 00:13:31.501 "is_configured": true, 00:13:31.501 "data_offset": 0, 00:13:31.501 "data_size": 65536 00:13:31.501 } 00:13:31.501 ] 00:13:31.501 }' 00:13:31.501 09:58:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:31.501 09:58:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:31.501 09:58:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:31.501 09:58:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:31.501 09:58:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:31.501 09:58:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:31.501 09:58:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:31.501 09:58:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:31.501 09:58:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=415 00:13:31.501 09:58:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:31.501 09:58:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:31.501 09:58:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:31.501 09:58:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:31.501 09:58:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:31.501 09:58:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:31.501 09:58:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.501 09:58:08 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.501 09:58:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:31.501 09:58:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:31.501 09:58:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.762 09:58:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:31.762 "name": "raid_bdev1", 00:13:31.762 "uuid": "e4e7e4d3-4934-42e6-bf63-63bba38c4012", 00:13:31.762 "strip_size_kb": 0, 00:13:31.762 "state": "online", 00:13:31.762 "raid_level": "raid1", 00:13:31.762 "superblock": false, 00:13:31.762 "num_base_bdevs": 2, 00:13:31.762 "num_base_bdevs_discovered": 2, 00:13:31.762 "num_base_bdevs_operational": 2, 00:13:31.762 "process": { 00:13:31.762 "type": "rebuild", 00:13:31.762 "target": "spare", 00:13:31.762 "progress": { 00:13:31.762 "blocks": 16384, 00:13:31.762 "percent": 25 00:13:31.762 } 00:13:31.762 }, 00:13:31.762 "base_bdevs_list": [ 00:13:31.762 { 00:13:31.762 "name": "spare", 00:13:31.762 "uuid": "a2a7c3ec-dea2-57bd-b9a8-1e6a6e8dc8d6", 00:13:31.762 "is_configured": true, 00:13:31.762 "data_offset": 0, 00:13:31.762 "data_size": 65536 00:13:31.762 }, 00:13:31.762 { 00:13:31.762 "name": "BaseBdev2", 00:13:31.762 "uuid": "3f87957c-ab10-548c-82c4-5696da502fa9", 00:13:31.762 "is_configured": true, 00:13:31.762 "data_offset": 0, 00:13:31.762 "data_size": 65536 00:13:31.762 } 00:13:31.762 ] 00:13:31.762 }' 00:13:31.762 09:58:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:31.762 09:58:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:31.762 09:58:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:31.762 09:58:08 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:31.762 09:58:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:31.762 [2024-10-21 09:58:08.246358] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:13:32.341 145.50 IOPS, 436.50 MiB/s [2024-10-21T09:58:08.936Z] [2024-10-21 09:58:08.719904] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:13:32.602 09:58:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:32.602 09:58:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:32.602 09:58:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:32.602 09:58:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:32.602 09:58:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:32.602 09:58:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:32.861 09:58:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.861 09:58:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:32.861 09:58:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.861 09:58:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:32.861 09:58:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.861 09:58:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:32.861 "name": "raid_bdev1", 00:13:32.861 "uuid": "e4e7e4d3-4934-42e6-bf63-63bba38c4012", 00:13:32.861 "strip_size_kb": 0, 00:13:32.861 "state": 
"online", 00:13:32.861 "raid_level": "raid1", 00:13:32.861 "superblock": false, 00:13:32.861 "num_base_bdevs": 2, 00:13:32.861 "num_base_bdevs_discovered": 2, 00:13:32.861 "num_base_bdevs_operational": 2, 00:13:32.861 "process": { 00:13:32.861 "type": "rebuild", 00:13:32.861 "target": "spare", 00:13:32.861 "progress": { 00:13:32.861 "blocks": 34816, 00:13:32.861 "percent": 53 00:13:32.861 } 00:13:32.861 }, 00:13:32.862 "base_bdevs_list": [ 00:13:32.862 { 00:13:32.862 "name": "spare", 00:13:32.862 "uuid": "a2a7c3ec-dea2-57bd-b9a8-1e6a6e8dc8d6", 00:13:32.862 "is_configured": true, 00:13:32.862 "data_offset": 0, 00:13:32.862 "data_size": 65536 00:13:32.862 }, 00:13:32.862 { 00:13:32.862 "name": "BaseBdev2", 00:13:32.862 "uuid": "3f87957c-ab10-548c-82c4-5696da502fa9", 00:13:32.862 "is_configured": true, 00:13:32.862 "data_offset": 0, 00:13:32.862 "data_size": 65536 00:13:32.862 } 00:13:32.862 ] 00:13:32.862 }' 00:13:32.862 09:58:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:32.862 09:58:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:32.862 09:58:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:32.862 09:58:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:32.862 09:58:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:32.862 [2024-10-21 09:58:09.333289] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:13:32.862 [2024-10-21 09:58:09.334314] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:13:33.690 126.20 IOPS, 378.60 MiB/s [2024-10-21T09:58:10.286Z] [2024-10-21 09:58:10.144515] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 
offset_end: 55296 00:13:33.691 [2024-10-21 09:58:10.255774] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:13:33.950 09:58:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:33.951 09:58:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:33.951 09:58:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:33.951 09:58:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:33.951 09:58:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:33.951 09:58:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:33.951 09:58:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:33.951 09:58:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.951 09:58:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.951 09:58:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:33.951 09:58:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.951 09:58:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:33.951 "name": "raid_bdev1", 00:13:33.951 "uuid": "e4e7e4d3-4934-42e6-bf63-63bba38c4012", 00:13:33.951 "strip_size_kb": 0, 00:13:33.951 "state": "online", 00:13:33.951 "raid_level": "raid1", 00:13:33.951 "superblock": false, 00:13:33.951 "num_base_bdevs": 2, 00:13:33.951 "num_base_bdevs_discovered": 2, 00:13:33.951 "num_base_bdevs_operational": 2, 00:13:33.951 "process": { 00:13:33.951 "type": "rebuild", 00:13:33.951 "target": "spare", 00:13:33.951 "progress": { 00:13:33.951 "blocks": 
53248, 00:13:33.951 "percent": 81 00:13:33.951 } 00:13:33.951 }, 00:13:33.951 "base_bdevs_list": [ 00:13:33.951 { 00:13:33.951 "name": "spare", 00:13:33.951 "uuid": "a2a7c3ec-dea2-57bd-b9a8-1e6a6e8dc8d6", 00:13:33.951 "is_configured": true, 00:13:33.951 "data_offset": 0, 00:13:33.951 "data_size": 65536 00:13:33.951 }, 00:13:33.951 { 00:13:33.951 "name": "BaseBdev2", 00:13:33.951 "uuid": "3f87957c-ab10-548c-82c4-5696da502fa9", 00:13:33.951 "is_configured": true, 00:13:33.951 "data_offset": 0, 00:13:33.951 "data_size": 65536 00:13:33.951 } 00:13:33.951 ] 00:13:33.951 }' 00:13:33.951 09:58:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:33.951 09:58:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:33.951 09:58:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:33.951 09:58:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:33.951 09:58:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:33.951 [2024-10-21 09:58:10.471947] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:13:34.471 112.50 IOPS, 337.50 MiB/s [2024-10-21T09:58:11.066Z] [2024-10-21 09:58:10.913602] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:34.471 [2024-10-21 09:58:11.013457] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:34.471 [2024-10-21 09:58:11.016402] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:35.040 09:58:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:35.040 09:58:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:35.040 09:58:11 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:35.040 09:58:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:35.040 09:58:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:35.040 09:58:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:35.040 09:58:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.040 09:58:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:35.040 09:58:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.040 09:58:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.040 09:58:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.040 09:58:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:35.040 "name": "raid_bdev1", 00:13:35.040 "uuid": "e4e7e4d3-4934-42e6-bf63-63bba38c4012", 00:13:35.040 "strip_size_kb": 0, 00:13:35.040 "state": "online", 00:13:35.040 "raid_level": "raid1", 00:13:35.040 "superblock": false, 00:13:35.040 "num_base_bdevs": 2, 00:13:35.040 "num_base_bdevs_discovered": 2, 00:13:35.040 "num_base_bdevs_operational": 2, 00:13:35.040 "base_bdevs_list": [ 00:13:35.040 { 00:13:35.040 "name": "spare", 00:13:35.040 "uuid": "a2a7c3ec-dea2-57bd-b9a8-1e6a6e8dc8d6", 00:13:35.040 "is_configured": true, 00:13:35.040 "data_offset": 0, 00:13:35.040 "data_size": 65536 00:13:35.040 }, 00:13:35.040 { 00:13:35.040 "name": "BaseBdev2", 00:13:35.040 "uuid": "3f87957c-ab10-548c-82c4-5696da502fa9", 00:13:35.040 "is_configured": true, 00:13:35.040 "data_offset": 0, 00:13:35.040 "data_size": 65536 00:13:35.040 } 00:13:35.040 ] 00:13:35.040 }' 00:13:35.040 09:58:11 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:35.040 09:58:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:35.040 09:58:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:35.040 09:58:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:35.040 09:58:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:13:35.040 09:58:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:35.040 09:58:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:35.040 09:58:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:35.040 09:58:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:35.040 09:58:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:35.040 09:58:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.040 09:58:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.040 09:58:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.040 09:58:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:35.040 09:58:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.301 100.57 IOPS, 301.71 MiB/s [2024-10-21T09:58:11.896Z] 09:58:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:35.301 "name": "raid_bdev1", 00:13:35.301 "uuid": "e4e7e4d3-4934-42e6-bf63-63bba38c4012", 00:13:35.301 "strip_size_kb": 0, 00:13:35.301 "state": "online", 00:13:35.301 "raid_level": "raid1", 00:13:35.301 "superblock": false, 00:13:35.301 
"num_base_bdevs": 2, 00:13:35.301 "num_base_bdevs_discovered": 2, 00:13:35.301 "num_base_bdevs_operational": 2, 00:13:35.301 "base_bdevs_list": [ 00:13:35.301 { 00:13:35.301 "name": "spare", 00:13:35.301 "uuid": "a2a7c3ec-dea2-57bd-b9a8-1e6a6e8dc8d6", 00:13:35.301 "is_configured": true, 00:13:35.301 "data_offset": 0, 00:13:35.301 "data_size": 65536 00:13:35.301 }, 00:13:35.301 { 00:13:35.301 "name": "BaseBdev2", 00:13:35.301 "uuid": "3f87957c-ab10-548c-82c4-5696da502fa9", 00:13:35.301 "is_configured": true, 00:13:35.301 "data_offset": 0, 00:13:35.301 "data_size": 65536 00:13:35.301 } 00:13:35.301 ] 00:13:35.301 }' 00:13:35.301 09:58:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:35.301 09:58:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:35.301 09:58:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:35.301 09:58:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:35.301 09:58:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:35.301 09:58:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:35.301 09:58:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:35.301 09:58:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:35.301 09:58:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:35.301 09:58:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:35.301 09:58:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:35.301 09:58:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:35.301 09:58:11 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:35.301 09:58:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:35.301 09:58:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.301 09:58:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:35.301 09:58:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.301 09:58:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.301 09:58:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.301 09:58:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:35.301 "name": "raid_bdev1", 00:13:35.301 "uuid": "e4e7e4d3-4934-42e6-bf63-63bba38c4012", 00:13:35.301 "strip_size_kb": 0, 00:13:35.301 "state": "online", 00:13:35.301 "raid_level": "raid1", 00:13:35.301 "superblock": false, 00:13:35.301 "num_base_bdevs": 2, 00:13:35.301 "num_base_bdevs_discovered": 2, 00:13:35.301 "num_base_bdevs_operational": 2, 00:13:35.301 "base_bdevs_list": [ 00:13:35.301 { 00:13:35.301 "name": "spare", 00:13:35.301 "uuid": "a2a7c3ec-dea2-57bd-b9a8-1e6a6e8dc8d6", 00:13:35.301 "is_configured": true, 00:13:35.301 "data_offset": 0, 00:13:35.301 "data_size": 65536 00:13:35.301 }, 00:13:35.301 { 00:13:35.301 "name": "BaseBdev2", 00:13:35.301 "uuid": "3f87957c-ab10-548c-82c4-5696da502fa9", 00:13:35.301 "is_configured": true, 00:13:35.301 "data_offset": 0, 00:13:35.301 "data_size": 65536 00:13:35.301 } 00:13:35.301 ] 00:13:35.301 }' 00:13:35.301 09:58:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:35.301 09:58:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.870 09:58:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd 
bdev_raid_delete raid_bdev1 00:13:35.870 09:58:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.870 09:58:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.870 [2024-10-21 09:58:12.173937] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:35.870 [2024-10-21 09:58:12.174068] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:35.870 00:13:35.870 Latency(us) 00:13:35.870 [2024-10-21T09:58:12.465Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:35.871 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:13:35.871 raid_bdev1 : 7.61 95.45 286.34 0.00 0.00 13916.63 321.96 114473.36 00:13:35.871 [2024-10-21T09:58:12.466Z] =================================================================================================================== 00:13:35.871 [2024-10-21T09:58:12.466Z] Total : 95.45 286.34 0.00 0.00 13916.63 321.96 114473.36 00:13:35.871 [2024-10-21 09:58:12.259655] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:35.871 [2024-10-21 09:58:12.259776] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:35.871 [2024-10-21 09:58:12.259890] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:35.871 [2024-10-21 09:58:12.259975] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000005b80 name raid_bdev1, state offline 00:13:35.871 { 00:13:35.871 "results": [ 00:13:35.871 { 00:13:35.871 "job": "raid_bdev1", 00:13:35.871 "core_mask": "0x1", 00:13:35.871 "workload": "randrw", 00:13:35.871 "percentage": 50, 00:13:35.871 "status": "finished", 00:13:35.871 "queue_depth": 2, 00:13:35.871 "io_size": 3145728, 00:13:35.871 "runtime": 7.606378, 00:13:35.871 "iops": 95.44621632004089, 00:13:35.871 "mibps": 
286.33864896012267, 00:13:35.871 "io_failed": 0, 00:13:35.871 "io_timeout": 0, 00:13:35.871 "avg_latency_us": 13916.626501618006, 00:13:35.871 "min_latency_us": 321.95633187772927, 00:13:35.871 "max_latency_us": 114473.36244541485 00:13:35.871 } 00:13:35.871 ], 00:13:35.871 "core_count": 1 00:13:35.871 } 00:13:35.871 09:58:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.871 09:58:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.871 09:58:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:35.871 09:58:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.871 09:58:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.871 09:58:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.871 09:58:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:35.871 09:58:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:35.871 09:58:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:13:35.871 09:58:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:13:35.871 09:58:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:35.871 09:58:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:13:35.871 09:58:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:35.871 09:58:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:35.871 09:58:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:35.871 09:58:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:35.871 09:58:12 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:35.871 09:58:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:35.871 09:58:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:13:36.132 /dev/nbd0 00:13:36.132 09:58:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:36.132 09:58:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:36.132 09:58:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:36.132 09:58:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:13:36.132 09:58:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:36.132 09:58:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:36.132 09:58:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:36.132 09:58:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:13:36.132 09:58:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:36.132 09:58:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:36.132 09:58:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:36.132 1+0 records in 00:13:36.132 1+0 records out 00:13:36.132 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000356941 s, 11.5 MB/s 00:13:36.132 09:58:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:36.132 09:58:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:13:36.132 
09:58:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:36.132 09:58:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:36.132 09:58:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:13:36.132 09:58:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:36.132 09:58:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:36.132 09:58:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:36.132 09:58:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:13:36.132 09:58:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:13:36.132 09:58:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:36.132 09:58:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:13:36.132 09:58:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:36.132 09:58:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:36.132 09:58:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:36.132 09:58:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:36.132 09:58:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:36.132 09:58:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:36.132 09:58:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:13:36.392 /dev/nbd1 00:13:36.392 09:58:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename 
/dev/nbd1 00:13:36.392 09:58:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:36.392 09:58:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:36.392 09:58:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:13:36.392 09:58:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:36.392 09:58:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:36.392 09:58:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:36.392 09:58:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:13:36.392 09:58:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:36.392 09:58:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:36.392 09:58:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:36.392 1+0 records in 00:13:36.392 1+0 records out 00:13:36.392 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00056116 s, 7.3 MB/s 00:13:36.392 09:58:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:36.392 09:58:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:13:36.392 09:58:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:36.392 09:58:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:36.392 09:58:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:13:36.392 09:58:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:36.392 09:58:12 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:36.392 09:58:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:36.652 09:58:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:36.652 09:58:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:36.652 09:58:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:36.652 09:58:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:36.652 09:58:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:36.652 09:58:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:36.652 09:58:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:36.652 09:58:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:36.652 09:58:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:36.652 09:58:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:36.652 09:58:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:36.652 09:58:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:36.652 09:58:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:36.652 09:58:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:36.652 09:58:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:36.652 09:58:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:36.652 09:58:13 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:36.652 09:58:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:36.652 09:58:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:36.652 09:58:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:36.652 09:58:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:36.652 09:58:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:36.912 09:58:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:36.912 09:58:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:36.912 09:58:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:36.912 09:58:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:36.912 09:58:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:36.912 09:58:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:36.912 09:58:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:36.912 09:58:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:36.912 09:58:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:36.912 09:58:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 76096 00:13:36.912 09:58:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@950 -- # '[' -z 76096 ']' 00:13:36.912 09:58:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # kill -0 76096 00:13:36.913 09:58:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # uname 00:13:36.913 09:58:13 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:36.913 09:58:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76096 00:13:36.913 09:58:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:36.913 09:58:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:36.913 09:58:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76096' 00:13:36.913 killing process with pid 76096 00:13:36.913 09:58:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@969 -- # kill 76096 00:13:36.913 Received shutdown signal, test time was about 8.849830 seconds 00:13:36.913 00:13:36.913 Latency(us) 00:13:36.913 [2024-10-21T09:58:13.508Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:36.913 [2024-10-21T09:58:13.508Z] =================================================================================================================== 00:13:36.913 [2024-10-21T09:58:13.508Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:36.913 [2024-10-21 09:58:13.478840] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:36.913 09:58:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@974 -- # wait 76096 00:13:37.173 [2024-10-21 09:58:13.730213] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:38.556 09:58:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:13:38.556 00:13:38.556 real 0m12.070s 00:13:38.556 user 0m14.885s 00:13:38.556 sys 0m1.525s 00:13:38.556 09:58:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:38.556 09:58:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:38.556 ************************************ 00:13:38.556 END TEST raid_rebuild_test_io 00:13:38.556 
************************************ 00:13:38.556 09:58:15 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:13:38.556 09:58:15 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:13:38.556 09:58:15 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:38.556 09:58:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:38.556 ************************************ 00:13:38.556 START TEST raid_rebuild_test_sb_io 00:13:38.556 ************************************ 00:13:38.556 09:58:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true true true 00:13:38.556 09:58:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:38.556 09:58:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:13:38.556 09:58:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:38.556 09:58:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:13:38.556 09:58:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:38.556 09:58:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:38.556 09:58:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:38.556 09:58:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:38.556 09:58:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:38.556 09:58:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:38.556 09:58:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:38.556 09:58:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:38.556 09:58:15 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:38.556 09:58:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:38.556 09:58:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:38.556 09:58:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:38.556 09:58:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:38.556 09:58:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:38.556 09:58:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:38.556 09:58:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:38.556 09:58:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:38.556 09:58:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:38.556 09:58:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:38.556 09:58:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:38.556 09:58:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76473 00:13:38.556 09:58:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76473 00:13:38.556 09:58:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:38.556 09:58:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@831 -- # '[' -z 76473 ']' 00:13:38.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:38.556 09:58:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:38.556 09:58:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:38.556 09:58:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:38.556 09:58:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:38.556 09:58:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:38.816 [2024-10-21 09:58:15.153989] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:13:38.816 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:38.816 Zero copy mechanism will not be used. 00:13:38.816 [2024-10-21 09:58:15.154551] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76473 ] 00:13:38.816 [2024-10-21 09:58:15.317483] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:39.085 [2024-10-21 09:58:15.463852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:39.357 [2024-10-21 09:58:15.711949] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:39.357 [2024-10-21 09:58:15.712038] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:39.617 09:58:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:39.617 09:58:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # return 0 00:13:39.617 09:58:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:39.617 09:58:15 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:39.617 09:58:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.617 09:58:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:39.617 BaseBdev1_malloc 00:13:39.617 09:58:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.617 09:58:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:39.617 09:58:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.617 09:58:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:39.617 [2024-10-21 09:58:16.049096] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:39.617 [2024-10-21 09:58:16.049178] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:39.617 [2024-10-21 09:58:16.049207] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:13:39.617 [2024-10-21 09:58:16.049220] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:39.617 [2024-10-21 09:58:16.051680] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:39.617 [2024-10-21 09:58:16.051720] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:39.617 BaseBdev1 00:13:39.617 09:58:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.617 09:58:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:39.617 09:58:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:39.617 09:58:16 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.617 09:58:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:39.617 BaseBdev2_malloc 00:13:39.617 09:58:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.617 09:58:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:39.617 09:58:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.617 09:58:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:39.617 [2024-10-21 09:58:16.112620] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:39.617 [2024-10-21 09:58:16.112683] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:39.617 [2024-10-21 09:58:16.112704] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:13:39.617 [2024-10-21 09:58:16.112716] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:39.617 [2024-10-21 09:58:16.115052] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:39.617 [2024-10-21 09:58:16.115094] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:39.617 BaseBdev2 00:13:39.617 09:58:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.617 09:58:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:39.617 09:58:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.617 09:58:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:39.617 spare_malloc 00:13:39.617 09:58:16 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.617 09:58:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:39.617 09:58:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.617 09:58:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:39.617 spare_delay 00:13:39.617 09:58:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.617 09:58:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:39.617 09:58:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.617 09:58:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:39.617 [2024-10-21 09:58:16.205006] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:39.617 [2024-10-21 09:58:16.205064] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:39.617 [2024-10-21 09:58:16.205083] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:13:39.617 [2024-10-21 09:58:16.205095] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:39.617 [2024-10-21 09:58:16.207561] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:39.617 [2024-10-21 09:58:16.207685] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:39.617 spare 00:13:39.617 09:58:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.617 09:58:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:13:39.877 09:58:16 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.877 09:58:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:39.877 [2024-10-21 09:58:16.217091] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:39.877 [2024-10-21 09:58:16.219287] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:39.877 [2024-10-21 09:58:16.219455] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000005b80 00:13:39.877 [2024-10-21 09:58:16.219470] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:39.877 [2024-10-21 09:58:16.219758] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:13:39.877 [2024-10-21 09:58:16.219939] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000005b80 00:13:39.877 [2024-10-21 09:58:16.219956] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000005b80 00:13:39.877 [2024-10-21 09:58:16.220110] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:39.877 09:58:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.877 09:58:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:39.877 09:58:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:39.877 09:58:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:39.877 09:58:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:39.877 09:58:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:39.877 09:58:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:39.877 
09:58:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:39.877 09:58:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:39.877 09:58:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:39.877 09:58:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:39.877 09:58:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:39.877 09:58:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.877 09:58:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.877 09:58:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:39.877 09:58:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.877 09:58:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:39.877 "name": "raid_bdev1", 00:13:39.877 "uuid": "f2adce9d-628d-4b70-bef9-3f0053161070", 00:13:39.877 "strip_size_kb": 0, 00:13:39.877 "state": "online", 00:13:39.877 "raid_level": "raid1", 00:13:39.877 "superblock": true, 00:13:39.877 "num_base_bdevs": 2, 00:13:39.877 "num_base_bdevs_discovered": 2, 00:13:39.877 "num_base_bdevs_operational": 2, 00:13:39.877 "base_bdevs_list": [ 00:13:39.877 { 00:13:39.877 "name": "BaseBdev1", 00:13:39.877 "uuid": "358aca8a-67fa-5183-b370-2d19ce2a4d8c", 00:13:39.877 "is_configured": true, 00:13:39.877 "data_offset": 2048, 00:13:39.877 "data_size": 63488 00:13:39.877 }, 00:13:39.877 { 00:13:39.877 "name": "BaseBdev2", 00:13:39.877 "uuid": "124d6632-a2b8-595f-93f8-04444f561749", 00:13:39.877 "is_configured": true, 00:13:39.877 "data_offset": 2048, 00:13:39.877 "data_size": 63488 00:13:39.877 } 00:13:39.877 ] 00:13:39.877 }' 00:13:39.877 09:58:16 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:39.877 09:58:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:40.137 09:58:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:40.137 09:58:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:40.137 09:58:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.137 09:58:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:40.137 [2024-10-21 09:58:16.654600] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:40.137 09:58:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.137 09:58:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:13:40.137 09:58:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.137 09:58:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.137 09:58:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:40.137 09:58:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:40.137 09:58:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.396 09:58:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:40.396 09:58:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:40.396 09:58:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:40.396 09:58:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:40.396 09:58:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.396 09:58:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:40.396 [2024-10-21 09:58:16.750502] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:40.396 09:58:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.396 09:58:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:40.396 09:58:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:40.396 09:58:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:40.396 09:58:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:40.396 09:58:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:40.396 09:58:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:40.396 09:58:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:40.396 09:58:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:40.396 09:58:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:40.396 09:58:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:40.396 09:58:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:40.396 09:58:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.396 09:58:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 
00:13:40.396 09:58:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:40.396 09:58:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.396 09:58:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:40.396 "name": "raid_bdev1", 00:13:40.396 "uuid": "f2adce9d-628d-4b70-bef9-3f0053161070", 00:13:40.396 "strip_size_kb": 0, 00:13:40.396 "state": "online", 00:13:40.396 "raid_level": "raid1", 00:13:40.396 "superblock": true, 00:13:40.396 "num_base_bdevs": 2, 00:13:40.396 "num_base_bdevs_discovered": 1, 00:13:40.396 "num_base_bdevs_operational": 1, 00:13:40.396 "base_bdevs_list": [ 00:13:40.396 { 00:13:40.396 "name": null, 00:13:40.396 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.396 "is_configured": false, 00:13:40.396 "data_offset": 0, 00:13:40.396 "data_size": 63488 00:13:40.396 }, 00:13:40.396 { 00:13:40.396 "name": "BaseBdev2", 00:13:40.396 "uuid": "124d6632-a2b8-595f-93f8-04444f561749", 00:13:40.396 "is_configured": true, 00:13:40.396 "data_offset": 2048, 00:13:40.396 "data_size": 63488 00:13:40.396 } 00:13:40.396 ] 00:13:40.396 }' 00:13:40.396 09:58:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:40.396 09:58:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:40.396 [2024-10-21 09:58:16.852868] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:40.396 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:40.396 Zero copy mechanism will not be used. 00:13:40.396 Running I/O for 60 seconds... 
00:13:40.656 09:58:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:40.656 09:58:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.656 09:58:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:40.656 [2024-10-21 09:58:17.175526] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:40.656 09:58:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.656 09:58:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:40.656 [2024-10-21 09:58:17.212505] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:40.656 [2024-10-21 09:58:17.214830] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:40.915 [2024-10-21 09:58:17.325261] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:40.915 [2024-10-21 09:58:17.326202] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:41.174 [2024-10-21 09:58:17.529771] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:41.174 [2024-10-21 09:58:17.530599] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:41.432 159.00 IOPS, 477.00 MiB/s [2024-10-21T09:58:18.027Z] [2024-10-21 09:58:17.982587] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:41.432 [2024-10-21 09:58:17.983169] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:41.692 09:58:18 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:41.692 09:58:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:41.692 09:58:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:41.692 09:58:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:41.692 09:58:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:41.692 09:58:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.692 09:58:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.692 09:58:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.692 09:58:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.692 09:58:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.692 09:58:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:41.692 "name": "raid_bdev1", 00:13:41.692 "uuid": "f2adce9d-628d-4b70-bef9-3f0053161070", 00:13:41.692 "strip_size_kb": 0, 00:13:41.692 "state": "online", 00:13:41.692 "raid_level": "raid1", 00:13:41.692 "superblock": true, 00:13:41.692 "num_base_bdevs": 2, 00:13:41.692 "num_base_bdevs_discovered": 2, 00:13:41.692 "num_base_bdevs_operational": 2, 00:13:41.692 "process": { 00:13:41.692 "type": "rebuild", 00:13:41.692 "target": "spare", 00:13:41.692 "progress": { 00:13:41.692 "blocks": 12288, 00:13:41.692 "percent": 19 00:13:41.692 } 00:13:41.692 }, 00:13:41.692 "base_bdevs_list": [ 00:13:41.692 { 00:13:41.692 "name": "spare", 00:13:41.692 "uuid": "3c182a64-a1c3-5cb9-961e-7847ed87cd54", 00:13:41.692 "is_configured": true, 00:13:41.692 "data_offset": 2048, 00:13:41.692 "data_size": 63488 
00:13:41.692 }, 00:13:41.692 { 00:13:41.692 "name": "BaseBdev2", 00:13:41.692 "uuid": "124d6632-a2b8-595f-93f8-04444f561749", 00:13:41.692 "is_configured": true, 00:13:41.692 "data_offset": 2048, 00:13:41.692 "data_size": 63488 00:13:41.692 } 00:13:41.692 ] 00:13:41.692 }' 00:13:41.692 09:58:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:41.952 09:58:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:41.952 09:58:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:41.952 [2024-10-21 09:58:18.318961] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:41.952 09:58:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:41.952 09:58:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:41.952 09:58:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.952 09:58:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.952 [2024-10-21 09:58:18.343082] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:41.952 [2024-10-21 09:58:18.539974] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:42.212 [2024-10-21 09:58:18.549435] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:42.212 [2024-10-21 09:58:18.549580] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:42.212 [2024-10-21 09:58:18.549623] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:42.212 [2024-10-21 09:58:18.595398] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 
0x60d000005e10 00:13:42.212 09:58:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.212 09:58:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:42.212 09:58:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:42.212 09:58:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:42.212 09:58:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:42.212 09:58:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:42.212 09:58:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:42.212 09:58:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:42.212 09:58:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:42.212 09:58:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:42.212 09:58:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:42.212 09:58:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.212 09:58:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:42.212 09:58:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.212 09:58:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:42.212 09:58:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.212 09:58:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:42.212 "name": "raid_bdev1", 00:13:42.212 "uuid": 
"f2adce9d-628d-4b70-bef9-3f0053161070", 00:13:42.212 "strip_size_kb": 0, 00:13:42.212 "state": "online", 00:13:42.212 "raid_level": "raid1", 00:13:42.212 "superblock": true, 00:13:42.212 "num_base_bdevs": 2, 00:13:42.212 "num_base_bdevs_discovered": 1, 00:13:42.212 "num_base_bdevs_operational": 1, 00:13:42.212 "base_bdevs_list": [ 00:13:42.212 { 00:13:42.212 "name": null, 00:13:42.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.212 "is_configured": false, 00:13:42.212 "data_offset": 0, 00:13:42.212 "data_size": 63488 00:13:42.212 }, 00:13:42.212 { 00:13:42.212 "name": "BaseBdev2", 00:13:42.212 "uuid": "124d6632-a2b8-595f-93f8-04444f561749", 00:13:42.212 "is_configured": true, 00:13:42.212 "data_offset": 2048, 00:13:42.212 "data_size": 63488 00:13:42.212 } 00:13:42.212 ] 00:13:42.212 }' 00:13:42.212 09:58:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:42.212 09:58:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:42.476 136.50 IOPS, 409.50 MiB/s [2024-10-21T09:58:19.071Z] 09:58:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:42.476 09:58:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:42.476 09:58:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:42.476 09:58:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:42.476 09:58:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:42.476 09:58:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.476 09:58:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:42.476 09:58:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 
00:13:42.476 09:58:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:42.735 09:58:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.735 09:58:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:42.735 "name": "raid_bdev1", 00:13:42.735 "uuid": "f2adce9d-628d-4b70-bef9-3f0053161070", 00:13:42.735 "strip_size_kb": 0, 00:13:42.735 "state": "online", 00:13:42.735 "raid_level": "raid1", 00:13:42.735 "superblock": true, 00:13:42.735 "num_base_bdevs": 2, 00:13:42.735 "num_base_bdevs_discovered": 1, 00:13:42.735 "num_base_bdevs_operational": 1, 00:13:42.735 "base_bdevs_list": [ 00:13:42.735 { 00:13:42.735 "name": null, 00:13:42.735 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.735 "is_configured": false, 00:13:42.735 "data_offset": 0, 00:13:42.735 "data_size": 63488 00:13:42.735 }, 00:13:42.735 { 00:13:42.735 "name": "BaseBdev2", 00:13:42.735 "uuid": "124d6632-a2b8-595f-93f8-04444f561749", 00:13:42.735 "is_configured": true, 00:13:42.735 "data_offset": 2048, 00:13:42.735 "data_size": 63488 00:13:42.735 } 00:13:42.735 ] 00:13:42.735 }' 00:13:42.735 09:58:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:42.735 09:58:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:42.735 09:58:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:42.735 09:58:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:42.735 09:58:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:42.735 09:58:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.735 09:58:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:42.735 
[2024-10-21 09:58:19.230963] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:42.735 09:58:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.735 09:58:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:42.735 [2024-10-21 09:58:19.311042] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:42.735 [2024-10-21 09:58:19.313349] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:42.995 [2024-10-21 09:58:19.439441] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:42.995 [2024-10-21 09:58:19.440268] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:42.995 [2024-10-21 09:58:19.568343] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:42.995 [2024-10-21 09:58:19.568890] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:43.254 [2024-10-21 09:58:19.798041] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:43.514 157.00 IOPS, 471.00 MiB/s [2024-10-21T09:58:20.109Z] [2024-10-21 09:58:19.913370] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:43.514 [2024-10-21 09:58:19.913796] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:43.773 [2024-10-21 09:58:20.263389] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:43.773 09:58:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:43.773 09:58:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:43.773 09:58:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:43.773 09:58:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:43.773 09:58:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:43.773 09:58:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.773 09:58:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.773 09:58:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.773 09:58:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:43.773 09:58:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.773 09:58:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:43.773 "name": "raid_bdev1", 00:13:43.773 "uuid": "f2adce9d-628d-4b70-bef9-3f0053161070", 00:13:43.773 "strip_size_kb": 0, 00:13:43.773 "state": "online", 00:13:43.773 "raid_level": "raid1", 00:13:43.773 "superblock": true, 00:13:43.773 "num_base_bdevs": 2, 00:13:43.773 "num_base_bdevs_discovered": 2, 00:13:43.773 "num_base_bdevs_operational": 2, 00:13:43.773 "process": { 00:13:43.773 "type": "rebuild", 00:13:43.773 "target": "spare", 00:13:43.773 "progress": { 00:13:43.773 "blocks": 14336, 00:13:43.773 "percent": 22 00:13:43.773 } 00:13:43.773 }, 00:13:43.773 "base_bdevs_list": [ 00:13:43.773 { 00:13:43.773 "name": "spare", 00:13:43.773 "uuid": "3c182a64-a1c3-5cb9-961e-7847ed87cd54", 00:13:43.773 "is_configured": true, 00:13:43.773 "data_offset": 2048, 00:13:43.773 "data_size": 63488 00:13:43.773 }, 00:13:43.773 { 
00:13:43.773 "name": "BaseBdev2", 00:13:43.773 "uuid": "124d6632-a2b8-595f-93f8-04444f561749", 00:13:43.773 "is_configured": true, 00:13:43.773 "data_offset": 2048, 00:13:43.773 "data_size": 63488 00:13:43.773 } 00:13:43.773 ] 00:13:43.773 }' 00:13:43.773 09:58:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:44.033 09:58:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:44.033 09:58:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:44.033 09:58:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:44.033 09:58:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:44.033 09:58:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:44.033 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:44.033 09:58:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:44.033 09:58:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:44.033 09:58:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:44.033 09:58:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=427 00:13:44.033 09:58:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:44.033 09:58:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:44.033 09:58:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:44.033 09:58:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:44.033 09:58:20 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:13:44.033 09:58:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:44.033 09:58:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:44.033 09:58:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.033 09:58:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.033 09:58:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:44.033 09:58:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.033 09:58:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:44.033 "name": "raid_bdev1", 00:13:44.033 "uuid": "f2adce9d-628d-4b70-bef9-3f0053161070", 00:13:44.033 "strip_size_kb": 0, 00:13:44.033 "state": "online", 00:13:44.033 "raid_level": "raid1", 00:13:44.033 "superblock": true, 00:13:44.033 "num_base_bdevs": 2, 00:13:44.033 "num_base_bdevs_discovered": 2, 00:13:44.033 "num_base_bdevs_operational": 2, 00:13:44.033 "process": { 00:13:44.033 "type": "rebuild", 00:13:44.033 "target": "spare", 00:13:44.033 "progress": { 00:13:44.033 "blocks": 14336, 00:13:44.033 "percent": 22 00:13:44.033 } 00:13:44.033 }, 00:13:44.033 "base_bdevs_list": [ 00:13:44.033 { 00:13:44.033 "name": "spare", 00:13:44.033 "uuid": "3c182a64-a1c3-5cb9-961e-7847ed87cd54", 00:13:44.033 "is_configured": true, 00:13:44.033 "data_offset": 2048, 00:13:44.033 "data_size": 63488 00:13:44.033 }, 00:13:44.033 { 00:13:44.033 "name": "BaseBdev2", 00:13:44.033 "uuid": "124d6632-a2b8-595f-93f8-04444f561749", 00:13:44.033 "is_configured": true, 00:13:44.033 "data_offset": 2048, 00:13:44.033 "data_size": 63488 00:13:44.033 } 00:13:44.033 ] 00:13:44.033 }' 00:13:44.033 09:58:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r 
'.process.type // "none"' 00:13:44.033 [2024-10-21 09:58:20.482350] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:44.033 09:58:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:44.033 09:58:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:44.034 09:58:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:44.034 09:58:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:44.293 [2024-10-21 09:58:20.837983] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:13:44.553 131.50 IOPS, 394.50 MiB/s [2024-10-21T09:58:21.148Z] [2024-10-21 09:58:20.970962] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:44.553 [2024-10-21 09:58:20.971483] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:44.813 [2024-10-21 09:58:21.208618] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:13:44.813 [2024-10-21 09:58:21.319010] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:13:44.813 [2024-10-21 09:58:21.319630] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:13:45.073 09:58:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:45.073 09:58:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:45.073 09:58:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 
-- # local raid_bdev_name=raid_bdev1 00:13:45.073 09:58:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:45.073 09:58:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:45.073 09:58:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:45.073 09:58:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.073 09:58:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.073 09:58:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:45.073 09:58:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.073 09:58:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.073 09:58:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:45.073 "name": "raid_bdev1", 00:13:45.073 "uuid": "f2adce9d-628d-4b70-bef9-3f0053161070", 00:13:45.073 "strip_size_kb": 0, 00:13:45.073 "state": "online", 00:13:45.073 "raid_level": "raid1", 00:13:45.073 "superblock": true, 00:13:45.073 "num_base_bdevs": 2, 00:13:45.073 "num_base_bdevs_discovered": 2, 00:13:45.073 "num_base_bdevs_operational": 2, 00:13:45.073 "process": { 00:13:45.073 "type": "rebuild", 00:13:45.073 "target": "spare", 00:13:45.073 "progress": { 00:13:45.073 "blocks": 30720, 00:13:45.073 "percent": 48 00:13:45.073 } 00:13:45.073 }, 00:13:45.073 "base_bdevs_list": [ 00:13:45.073 { 00:13:45.073 "name": "spare", 00:13:45.073 "uuid": "3c182a64-a1c3-5cb9-961e-7847ed87cd54", 00:13:45.073 "is_configured": true, 00:13:45.073 "data_offset": 2048, 00:13:45.073 "data_size": 63488 00:13:45.073 }, 00:13:45.073 { 00:13:45.073 "name": "BaseBdev2", 00:13:45.073 "uuid": "124d6632-a2b8-595f-93f8-04444f561749", 00:13:45.073 "is_configured": true, 
00:13:45.073 "data_offset": 2048, 00:13:45.073 "data_size": 63488 00:13:45.073 } 00:13:45.073 ] 00:13:45.073 }' 00:13:45.073 09:58:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:45.073 [2024-10-21 09:58:21.656931] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:13:45.073 09:58:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:45.073 09:58:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:45.333 09:58:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:45.333 09:58:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:45.333 [2024-10-21 09:58:21.867142] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:13:45.903 115.20 IOPS, 345.60 MiB/s [2024-10-21T09:58:22.498Z] [2024-10-21 09:58:22.310038] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:13:46.163 09:58:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:46.163 09:58:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:46.163 09:58:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:46.163 09:58:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:46.163 09:58:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:46.163 09:58:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:46.163 09:58:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:13:46.163 09:58:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.163 09:58:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:46.163 09:58:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:46.163 09:58:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.163 [2024-10-21 09:58:22.748769] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:13:46.423 09:58:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:46.423 "name": "raid_bdev1", 00:13:46.423 "uuid": "f2adce9d-628d-4b70-bef9-3f0053161070", 00:13:46.423 "strip_size_kb": 0, 00:13:46.423 "state": "online", 00:13:46.423 "raid_level": "raid1", 00:13:46.423 "superblock": true, 00:13:46.423 "num_base_bdevs": 2, 00:13:46.423 "num_base_bdevs_discovered": 2, 00:13:46.423 "num_base_bdevs_operational": 2, 00:13:46.423 "process": { 00:13:46.423 "type": "rebuild", 00:13:46.423 "target": "spare", 00:13:46.423 "progress": { 00:13:46.423 "blocks": 45056, 00:13:46.423 "percent": 70 00:13:46.423 } 00:13:46.423 }, 00:13:46.423 "base_bdevs_list": [ 00:13:46.423 { 00:13:46.423 "name": "spare", 00:13:46.423 "uuid": "3c182a64-a1c3-5cb9-961e-7847ed87cd54", 00:13:46.423 "is_configured": true, 00:13:46.423 "data_offset": 2048, 00:13:46.423 "data_size": 63488 00:13:46.423 }, 00:13:46.423 { 00:13:46.423 "name": "BaseBdev2", 00:13:46.423 "uuid": "124d6632-a2b8-595f-93f8-04444f561749", 00:13:46.423 "is_configured": true, 00:13:46.423 "data_offset": 2048, 00:13:46.423 "data_size": 63488 00:13:46.423 } 00:13:46.423 ] 00:13:46.423 }' 00:13:46.423 09:58:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:46.423 09:58:22 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:46.423 09:58:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:46.423 09:58:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:46.423 09:58:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:46.993 103.00 IOPS, 309.00 MiB/s [2024-10-21T09:58:23.588Z] [2024-10-21 09:58:23.322500] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:13:47.253 [2024-10-21 09:58:23.763588] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:47.253 09:58:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:47.253 09:58:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:47.253 09:58:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:47.513 09:58:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:47.513 09:58:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:47.513 09:58:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:47.513 09:58:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.513 09:58:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:47.513 09:58:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.513 09:58:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:47.513 [2024-10-21 09:58:23.869549] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on 
raid bdev raid_bdev1 00:13:47.513 [2024-10-21 09:58:23.873113] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:47.513 94.14 IOPS, 282.43 MiB/s [2024-10-21T09:58:24.108Z] 09:58:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.513 09:58:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:47.513 "name": "raid_bdev1", 00:13:47.513 "uuid": "f2adce9d-628d-4b70-bef9-3f0053161070", 00:13:47.513 "strip_size_kb": 0, 00:13:47.513 "state": "online", 00:13:47.513 "raid_level": "raid1", 00:13:47.513 "superblock": true, 00:13:47.513 "num_base_bdevs": 2, 00:13:47.513 "num_base_bdevs_discovered": 2, 00:13:47.513 "num_base_bdevs_operational": 2, 00:13:47.513 "process": { 00:13:47.513 "type": "rebuild", 00:13:47.513 "target": "spare", 00:13:47.513 "progress": { 00:13:47.513 "blocks": 63488, 00:13:47.513 "percent": 100 00:13:47.513 } 00:13:47.513 }, 00:13:47.513 "base_bdevs_list": [ 00:13:47.513 { 00:13:47.513 "name": "spare", 00:13:47.513 "uuid": "3c182a64-a1c3-5cb9-961e-7847ed87cd54", 00:13:47.513 "is_configured": true, 00:13:47.513 "data_offset": 2048, 00:13:47.513 "data_size": 63488 00:13:47.513 }, 00:13:47.513 { 00:13:47.513 "name": "BaseBdev2", 00:13:47.514 "uuid": "124d6632-a2b8-595f-93f8-04444f561749", 00:13:47.514 "is_configured": true, 00:13:47.514 "data_offset": 2048, 00:13:47.514 "data_size": 63488 00:13:47.514 } 00:13:47.514 ] 00:13:47.514 }' 00:13:47.514 09:58:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:47.514 09:58:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:47.514 09:58:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:47.514 09:58:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:47.514 09:58:24 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:13:48.453 86.38 IOPS, 259.12 MiB/s [2024-10-21T09:58:25.048Z] 09:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:48.453 09:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:48.453 09:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:48.453 09:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:48.453 09:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:48.453 09:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:48.453 09:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.453 09:58:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.453 09:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:48.453 09:58:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:48.712 09:58:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.713 09:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:48.713 "name": "raid_bdev1", 00:13:48.713 "uuid": "f2adce9d-628d-4b70-bef9-3f0053161070", 00:13:48.713 "strip_size_kb": 0, 00:13:48.713 "state": "online", 00:13:48.713 "raid_level": "raid1", 00:13:48.713 "superblock": true, 00:13:48.713 "num_base_bdevs": 2, 00:13:48.713 "num_base_bdevs_discovered": 2, 00:13:48.713 "num_base_bdevs_operational": 2, 00:13:48.713 "base_bdevs_list": [ 00:13:48.713 { 00:13:48.713 "name": "spare", 00:13:48.713 "uuid": "3c182a64-a1c3-5cb9-961e-7847ed87cd54", 00:13:48.713 "is_configured": true, 00:13:48.713 
"data_offset": 2048, 00:13:48.713 "data_size": 63488 00:13:48.713 }, 00:13:48.713 { 00:13:48.713 "name": "BaseBdev2", 00:13:48.713 "uuid": "124d6632-a2b8-595f-93f8-04444f561749", 00:13:48.713 "is_configured": true, 00:13:48.713 "data_offset": 2048, 00:13:48.713 "data_size": 63488 00:13:48.713 } 00:13:48.713 ] 00:13:48.713 }' 00:13:48.713 09:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:48.713 09:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:48.713 09:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:48.713 09:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:48.713 09:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:13:48.713 09:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:48.713 09:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:48.713 09:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:48.713 09:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:48.713 09:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:48.713 09:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.713 09:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:48.713 09:58:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.713 09:58:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:48.713 09:58:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:13:48.713 09:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:48.713 "name": "raid_bdev1", 00:13:48.713 "uuid": "f2adce9d-628d-4b70-bef9-3f0053161070", 00:13:48.713 "strip_size_kb": 0, 00:13:48.713 "state": "online", 00:13:48.713 "raid_level": "raid1", 00:13:48.713 "superblock": true, 00:13:48.713 "num_base_bdevs": 2, 00:13:48.713 "num_base_bdevs_discovered": 2, 00:13:48.713 "num_base_bdevs_operational": 2, 00:13:48.713 "base_bdevs_list": [ 00:13:48.713 { 00:13:48.713 "name": "spare", 00:13:48.713 "uuid": "3c182a64-a1c3-5cb9-961e-7847ed87cd54", 00:13:48.713 "is_configured": true, 00:13:48.713 "data_offset": 2048, 00:13:48.713 "data_size": 63488 00:13:48.713 }, 00:13:48.713 { 00:13:48.713 "name": "BaseBdev2", 00:13:48.713 "uuid": "124d6632-a2b8-595f-93f8-04444f561749", 00:13:48.713 "is_configured": true, 00:13:48.713 "data_offset": 2048, 00:13:48.713 "data_size": 63488 00:13:48.713 } 00:13:48.713 ] 00:13:48.713 }' 00:13:48.713 09:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:48.713 09:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:48.713 09:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:48.973 09:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:48.973 09:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:48.973 09:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:48.973 09:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:48.973 09:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:48.973 09:58:25 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:48.973 09:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:48.973 09:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:48.973 09:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:48.973 09:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:48.973 09:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:48.973 09:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.973 09:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:48.973 09:58:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.973 09:58:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:48.973 09:58:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.973 09:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:48.973 "name": "raid_bdev1", 00:13:48.973 "uuid": "f2adce9d-628d-4b70-bef9-3f0053161070", 00:13:48.973 "strip_size_kb": 0, 00:13:48.973 "state": "online", 00:13:48.973 "raid_level": "raid1", 00:13:48.973 "superblock": true, 00:13:48.973 "num_base_bdevs": 2, 00:13:48.973 "num_base_bdevs_discovered": 2, 00:13:48.973 "num_base_bdevs_operational": 2, 00:13:48.973 "base_bdevs_list": [ 00:13:48.973 { 00:13:48.973 "name": "spare", 00:13:48.973 "uuid": "3c182a64-a1c3-5cb9-961e-7847ed87cd54", 00:13:48.973 "is_configured": true, 00:13:48.973 "data_offset": 2048, 00:13:48.973 "data_size": 63488 00:13:48.973 }, 00:13:48.973 { 00:13:48.973 "name": "BaseBdev2", 00:13:48.973 "uuid": "124d6632-a2b8-595f-93f8-04444f561749", 00:13:48.973 
"is_configured": true, 00:13:48.973 "data_offset": 2048, 00:13:48.973 "data_size": 63488 00:13:48.973 } 00:13:48.973 ] 00:13:48.973 }' 00:13:48.973 09:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:48.973 09:58:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:49.233 09:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:49.233 09:58:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.233 09:58:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:49.233 [2024-10-21 09:58:25.789939] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:49.233 [2024-10-21 09:58:25.790082] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:49.493 00:13:49.493 Latency(us) 00:13:49.493 [2024-10-21T09:58:26.088Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:49.493 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:13:49.493 raid_bdev1 : 8.96 80.83 242.50 0.00 0.00 17182.09 327.68 113975.65 00:13:49.493 [2024-10-21T09:58:26.088Z] =================================================================================================================== 00:13:49.493 [2024-10-21T09:58:26.088Z] Total : 80.83 242.50 0.00 0.00 17182.09 327.68 113975.65 00:13:49.493 [2024-10-21 09:58:25.856604] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:49.493 [2024-10-21 09:58:25.856726] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:49.493 [2024-10-21 09:58:25.856854] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:49.493 [2024-10-21 09:58:25.856925] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000005b80 name raid_bdev1, state offline 00:13:49.493 { 00:13:49.493 "results": [ 00:13:49.493 { 00:13:49.493 "job": "raid_bdev1", 00:13:49.493 "core_mask": "0x1", 00:13:49.493 "workload": "randrw", 00:13:49.493 "percentage": 50, 00:13:49.493 "status": "finished", 00:13:49.493 "queue_depth": 2, 00:13:49.493 "io_size": 3145728, 00:13:49.493 "runtime": 8.956792, 00:13:49.493 "iops": 80.83251235486992, 00:13:49.493 "mibps": 242.49753706460976, 00:13:49.493 "io_failed": 0, 00:13:49.493 "io_timeout": 0, 00:13:49.493 "avg_latency_us": 17182.085380735047, 00:13:49.493 "min_latency_us": 327.68, 00:13:49.493 "max_latency_us": 113975.65217391304 00:13:49.493 } 00:13:49.493 ], 00:13:49.493 "core_count": 1 00:13:49.493 } 00:13:49.493 09:58:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.493 09:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.493 09:58:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.493 09:58:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:49.493 09:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:49.493 09:58:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.493 09:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:49.493 09:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:49.493 09:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:13:49.493 09:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:13:49.493 09:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:49.493 09:58:25 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:13:49.493 09:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:49.493 09:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:49.493 09:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:49.493 09:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:49.493 09:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:49.493 09:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:49.493 09:58:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:13:49.753 /dev/nbd0 00:13:49.753 09:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:49.753 09:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:49.753 09:58:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:49.753 09:58:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:13:49.753 09:58:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:49.753 09:58:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:49.753 09:58:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:49.753 09:58:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:13:49.753 09:58:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:49.753 09:58:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:49.753 09:58:26 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:49.753 1+0 records in 00:13:49.753 1+0 records out 00:13:49.753 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000743676 s, 5.5 MB/s 00:13:49.753 09:58:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:49.753 09:58:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:13:49.753 09:58:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:49.753 09:58:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:49.753 09:58:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:13:49.753 09:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:49.753 09:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:49.753 09:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:49.753 09:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:13:49.753 09:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:13:49.753 09:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:49.753 09:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:13:49.753 09:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:49.753 09:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:49.753 09:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:49.753 
09:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:49.753 09:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:49.753 09:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:49.753 09:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:13:50.013 /dev/nbd1 00:13:50.013 09:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:50.013 09:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:50.013 09:58:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:50.013 09:58:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:13:50.013 09:58:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:50.013 09:58:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:50.013 09:58:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:50.013 09:58:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:13:50.013 09:58:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:50.013 09:58:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:50.013 09:58:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:50.013 1+0 records in 00:13:50.013 1+0 records out 00:13:50.013 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000508241 s, 8.1 MB/s 00:13:50.013 09:58:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:50.013 09:58:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:13:50.013 09:58:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:50.013 09:58:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:50.013 09:58:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:13:50.013 09:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:50.013 09:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:50.013 09:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:50.273 09:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:50.273 09:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:50.273 09:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:50.273 09:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:50.273 09:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:50.273 09:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:50.273 09:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:50.273 09:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:50.273 09:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:50.273 09:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 
00:13:50.273 09:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:50.273 09:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:50.273 09:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:50.273 09:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:50.273 09:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:50.273 09:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:50.273 09:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:50.273 09:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:50.273 09:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:50.273 09:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:50.273 09:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:50.273 09:58:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:50.533 09:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:50.533 09:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:50.533 09:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:50.533 09:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:50.533 09:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:50.533 09:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:50.533 
09:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:50.533 09:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:50.533 09:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:50.533 09:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:50.533 09:58:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.533 09:58:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:50.533 09:58:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.533 09:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:50.533 09:58:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.533 09:58:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:50.533 [2024-10-21 09:58:27.081060] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:50.533 [2024-10-21 09:58:27.081125] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:50.533 [2024-10-21 09:58:27.081150] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:50.533 [2024-10-21 09:58:27.081163] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:50.533 [2024-10-21 09:58:27.084019] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:50.533 [2024-10-21 09:58:27.084071] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:50.533 [2024-10-21 09:58:27.084177] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:50.533 [2024-10-21 09:58:27.084245] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:50.533 [2024-10-21 09:58:27.084412] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:50.533 spare 00:13:50.533 09:58:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.533 09:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:50.533 09:58:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.533 09:58:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:50.793 [2024-10-21 09:58:27.184774] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000005f00 00:13:50.793 [2024-10-21 09:58:27.184808] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:50.793 [2024-10-21 09:58:27.185157] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ae60 00:13:50.793 [2024-10-21 09:58:27.185356] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000005f00 00:13:50.793 [2024-10-21 09:58:27.185372] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000005f00 00:13:50.793 [2024-10-21 09:58:27.185642] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:50.793 09:58:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.793 09:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:50.793 09:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:50.793 09:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:50.793 09:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:13:50.793 09:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:50.793 09:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:50.793 09:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:50.793 09:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:50.793 09:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:50.793 09:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:50.793 09:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.793 09:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:50.793 09:58:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.793 09:58:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:50.793 09:58:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.793 09:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:50.793 "name": "raid_bdev1", 00:13:50.793 "uuid": "f2adce9d-628d-4b70-bef9-3f0053161070", 00:13:50.793 "strip_size_kb": 0, 00:13:50.793 "state": "online", 00:13:50.793 "raid_level": "raid1", 00:13:50.793 "superblock": true, 00:13:50.793 "num_base_bdevs": 2, 00:13:50.793 "num_base_bdevs_discovered": 2, 00:13:50.793 "num_base_bdevs_operational": 2, 00:13:50.793 "base_bdevs_list": [ 00:13:50.793 { 00:13:50.793 "name": "spare", 00:13:50.793 "uuid": "3c182a64-a1c3-5cb9-961e-7847ed87cd54", 00:13:50.793 "is_configured": true, 00:13:50.793 "data_offset": 2048, 00:13:50.793 "data_size": 63488 00:13:50.793 }, 00:13:50.793 { 00:13:50.793 "name": 
"BaseBdev2", 00:13:50.793 "uuid": "124d6632-a2b8-595f-93f8-04444f561749", 00:13:50.793 "is_configured": true, 00:13:50.793 "data_offset": 2048, 00:13:50.793 "data_size": 63488 00:13:50.793 } 00:13:50.793 ] 00:13:50.793 }' 00:13:50.793 09:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:50.793 09:58:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:51.362 09:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:51.362 09:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:51.362 09:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:51.363 09:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:51.363 09:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:51.363 09:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.363 09:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:51.363 09:58:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.363 09:58:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:51.363 09:58:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.363 09:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:51.363 "name": "raid_bdev1", 00:13:51.363 "uuid": "f2adce9d-628d-4b70-bef9-3f0053161070", 00:13:51.363 "strip_size_kb": 0, 00:13:51.363 "state": "online", 00:13:51.363 "raid_level": "raid1", 00:13:51.363 "superblock": true, 00:13:51.363 "num_base_bdevs": 2, 00:13:51.363 "num_base_bdevs_discovered": 2, 00:13:51.363 
"num_base_bdevs_operational": 2, 00:13:51.363 "base_bdevs_list": [ 00:13:51.363 { 00:13:51.363 "name": "spare", 00:13:51.363 "uuid": "3c182a64-a1c3-5cb9-961e-7847ed87cd54", 00:13:51.363 "is_configured": true, 00:13:51.363 "data_offset": 2048, 00:13:51.363 "data_size": 63488 00:13:51.363 }, 00:13:51.363 { 00:13:51.363 "name": "BaseBdev2", 00:13:51.363 "uuid": "124d6632-a2b8-595f-93f8-04444f561749", 00:13:51.363 "is_configured": true, 00:13:51.363 "data_offset": 2048, 00:13:51.363 "data_size": 63488 00:13:51.363 } 00:13:51.363 ] 00:13:51.363 }' 00:13:51.363 09:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:51.363 09:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:51.363 09:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:51.363 09:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:51.363 09:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.363 09:58:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.363 09:58:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:51.363 09:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:51.363 09:58:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.363 09:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:51.363 09:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:51.363 09:58:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.363 09:58:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- 
# set +x 00:13:51.363 [2024-10-21 09:58:27.879494] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:51.363 09:58:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.363 09:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:51.363 09:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:51.363 09:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:51.363 09:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:51.363 09:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:51.363 09:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:51.363 09:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:51.363 09:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:51.363 09:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:51.363 09:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:51.363 09:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.363 09:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:51.363 09:58:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.363 09:58:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:51.363 09:58:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.363 09:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:13:51.363 "name": "raid_bdev1", 00:13:51.363 "uuid": "f2adce9d-628d-4b70-bef9-3f0053161070", 00:13:51.363 "strip_size_kb": 0, 00:13:51.363 "state": "online", 00:13:51.363 "raid_level": "raid1", 00:13:51.363 "superblock": true, 00:13:51.363 "num_base_bdevs": 2, 00:13:51.363 "num_base_bdevs_discovered": 1, 00:13:51.363 "num_base_bdevs_operational": 1, 00:13:51.363 "base_bdevs_list": [ 00:13:51.363 { 00:13:51.363 "name": null, 00:13:51.363 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.363 "is_configured": false, 00:13:51.363 "data_offset": 0, 00:13:51.363 "data_size": 63488 00:13:51.363 }, 00:13:51.363 { 00:13:51.363 "name": "BaseBdev2", 00:13:51.363 "uuid": "124d6632-a2b8-595f-93f8-04444f561749", 00:13:51.363 "is_configured": true, 00:13:51.363 "data_offset": 2048, 00:13:51.363 "data_size": 63488 00:13:51.363 } 00:13:51.363 ] 00:13:51.363 }' 00:13:51.363 09:58:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:51.363 09:58:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:51.933 09:58:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:51.933 09:58:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.934 09:58:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:51.934 [2024-10-21 09:58:28.336799] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:51.934 [2024-10-21 09:58:28.337142] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:51.934 [2024-10-21 09:58:28.337210] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:51.934 [2024-10-21 09:58:28.337290] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:51.934 [2024-10-21 09:58:28.357237] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002af30 00:13:51.934 09:58:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.934 09:58:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:51.934 [2024-10-21 09:58:28.359525] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:52.873 09:58:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:52.873 09:58:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:52.873 09:58:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:52.873 09:58:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:52.873 09:58:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:52.873 09:58:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.873 09:58:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:52.873 09:58:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.873 09:58:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.873 09:58:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.873 09:58:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:52.873 "name": "raid_bdev1", 00:13:52.873 "uuid": "f2adce9d-628d-4b70-bef9-3f0053161070", 00:13:52.873 "strip_size_kb": 0, 00:13:52.873 "state": "online", 
00:13:52.873 "raid_level": "raid1", 00:13:52.873 "superblock": true, 00:13:52.873 "num_base_bdevs": 2, 00:13:52.873 "num_base_bdevs_discovered": 2, 00:13:52.873 "num_base_bdevs_operational": 2, 00:13:52.873 "process": { 00:13:52.873 "type": "rebuild", 00:13:52.873 "target": "spare", 00:13:52.873 "progress": { 00:13:52.873 "blocks": 20480, 00:13:52.873 "percent": 32 00:13:52.873 } 00:13:52.873 }, 00:13:52.873 "base_bdevs_list": [ 00:13:52.873 { 00:13:52.873 "name": "spare", 00:13:52.873 "uuid": "3c182a64-a1c3-5cb9-961e-7847ed87cd54", 00:13:52.873 "is_configured": true, 00:13:52.873 "data_offset": 2048, 00:13:52.873 "data_size": 63488 00:13:52.873 }, 00:13:52.873 { 00:13:52.873 "name": "BaseBdev2", 00:13:52.873 "uuid": "124d6632-a2b8-595f-93f8-04444f561749", 00:13:52.873 "is_configured": true, 00:13:52.873 "data_offset": 2048, 00:13:52.873 "data_size": 63488 00:13:52.873 } 00:13:52.873 ] 00:13:52.873 }' 00:13:52.873 09:58:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:53.136 09:58:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:53.136 09:58:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:53.136 09:58:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:53.136 09:58:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:53.136 09:58:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.136 09:58:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:53.136 [2024-10-21 09:58:29.524061] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:53.136 [2024-10-21 09:58:29.573993] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:53.136 [2024-10-21 
09:58:29.574062] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:53.136 [2024-10-21 09:58:29.574087] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:53.136 [2024-10-21 09:58:29.574095] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:53.136 09:58:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.136 09:58:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:53.136 09:58:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:53.136 09:58:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:53.136 09:58:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:53.136 09:58:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:53.136 09:58:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:53.136 09:58:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:53.136 09:58:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:53.136 09:58:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:53.136 09:58:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:53.136 09:58:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.136 09:58:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:53.136 09:58:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.136 09:58:29 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:13:53.136 09:58:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.136 09:58:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:53.136 "name": "raid_bdev1", 00:13:53.136 "uuid": "f2adce9d-628d-4b70-bef9-3f0053161070", 00:13:53.136 "strip_size_kb": 0, 00:13:53.136 "state": "online", 00:13:53.136 "raid_level": "raid1", 00:13:53.136 "superblock": true, 00:13:53.136 "num_base_bdevs": 2, 00:13:53.136 "num_base_bdevs_discovered": 1, 00:13:53.136 "num_base_bdevs_operational": 1, 00:13:53.136 "base_bdevs_list": [ 00:13:53.136 { 00:13:53.136 "name": null, 00:13:53.136 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:53.136 "is_configured": false, 00:13:53.136 "data_offset": 0, 00:13:53.136 "data_size": 63488 00:13:53.136 }, 00:13:53.136 { 00:13:53.136 "name": "BaseBdev2", 00:13:53.136 "uuid": "124d6632-a2b8-595f-93f8-04444f561749", 00:13:53.136 "is_configured": true, 00:13:53.136 "data_offset": 2048, 00:13:53.136 "data_size": 63488 00:13:53.136 } 00:13:53.136 ] 00:13:53.136 }' 00:13:53.136 09:58:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:53.136 09:58:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:53.707 09:58:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:53.707 09:58:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.707 09:58:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:53.707 [2024-10-21 09:58:30.078564] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:53.707 [2024-10-21 09:58:30.078727] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:53.707 [2024-10-21 09:58:30.078779] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x61600000ab80 00:13:53.707 [2024-10-21 09:58:30.078829] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:53.707 [2024-10-21 09:58:30.079473] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:53.707 [2024-10-21 09:58:30.079556] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:53.707 [2024-10-21 09:58:30.079785] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:53.707 [2024-10-21 09:58:30.079836] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:53.707 [2024-10-21 09:58:30.079891] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:53.707 [2024-10-21 09:58:30.079966] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:53.707 [2024-10-21 09:58:30.099794] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b000 00:13:53.707 spare 00:13:53.707 09:58:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.707 09:58:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:53.707 [2024-10-21 09:58:30.102080] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:54.647 09:58:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:54.647 09:58:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:54.647 09:58:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:54.647 09:58:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:54.647 09:58:31 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:54.647 09:58:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.647 09:58:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:54.647 09:58:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.647 09:58:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:54.647 09:58:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.647 09:58:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:54.647 "name": "raid_bdev1", 00:13:54.647 "uuid": "f2adce9d-628d-4b70-bef9-3f0053161070", 00:13:54.647 "strip_size_kb": 0, 00:13:54.647 "state": "online", 00:13:54.647 "raid_level": "raid1", 00:13:54.647 "superblock": true, 00:13:54.647 "num_base_bdevs": 2, 00:13:54.647 "num_base_bdevs_discovered": 2, 00:13:54.647 "num_base_bdevs_operational": 2, 00:13:54.647 "process": { 00:13:54.647 "type": "rebuild", 00:13:54.647 "target": "spare", 00:13:54.647 "progress": { 00:13:54.647 "blocks": 20480, 00:13:54.647 "percent": 32 00:13:54.647 } 00:13:54.647 }, 00:13:54.647 "base_bdevs_list": [ 00:13:54.647 { 00:13:54.647 "name": "spare", 00:13:54.647 "uuid": "3c182a64-a1c3-5cb9-961e-7847ed87cd54", 00:13:54.647 "is_configured": true, 00:13:54.647 "data_offset": 2048, 00:13:54.647 "data_size": 63488 00:13:54.647 }, 00:13:54.647 { 00:13:54.647 "name": "BaseBdev2", 00:13:54.647 "uuid": "124d6632-a2b8-595f-93f8-04444f561749", 00:13:54.647 "is_configured": true, 00:13:54.647 "data_offset": 2048, 00:13:54.647 "data_size": 63488 00:13:54.647 } 00:13:54.647 ] 00:13:54.647 }' 00:13:54.647 09:58:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:54.647 09:58:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:13:54.647 09:58:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:54.907 09:58:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:54.907 09:58:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:54.907 09:58:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.907 09:58:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:54.907 [2024-10-21 09:58:31.258885] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:54.907 [2024-10-21 09:58:31.316687] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:54.907 [2024-10-21 09:58:31.316761] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:54.907 [2024-10-21 09:58:31.316777] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:54.907 [2024-10-21 09:58:31.316788] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:54.907 09:58:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.907 09:58:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:54.907 09:58:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:54.907 09:58:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:54.907 09:58:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:54.907 09:58:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:54.907 09:58:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:13:54.907 09:58:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:54.907 09:58:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:54.907 09:58:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:54.907 09:58:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:54.907 09:58:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:54.907 09:58:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.907 09:58:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.907 09:58:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:54.907 09:58:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.907 09:58:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:54.907 "name": "raid_bdev1", 00:13:54.908 "uuid": "f2adce9d-628d-4b70-bef9-3f0053161070", 00:13:54.908 "strip_size_kb": 0, 00:13:54.908 "state": "online", 00:13:54.908 "raid_level": "raid1", 00:13:54.908 "superblock": true, 00:13:54.908 "num_base_bdevs": 2, 00:13:54.908 "num_base_bdevs_discovered": 1, 00:13:54.908 "num_base_bdevs_operational": 1, 00:13:54.908 "base_bdevs_list": [ 00:13:54.908 { 00:13:54.908 "name": null, 00:13:54.908 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:54.908 "is_configured": false, 00:13:54.908 "data_offset": 0, 00:13:54.908 "data_size": 63488 00:13:54.908 }, 00:13:54.908 { 00:13:54.908 "name": "BaseBdev2", 00:13:54.908 "uuid": "124d6632-a2b8-595f-93f8-04444f561749", 00:13:54.908 "is_configured": true, 00:13:54.908 "data_offset": 2048, 00:13:54.908 "data_size": 63488 00:13:54.908 } 00:13:54.908 ] 00:13:54.908 }' 
00:13:54.908 09:58:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:54.908 09:58:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:55.477 09:58:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:55.477 09:58:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:55.477 09:58:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:55.477 09:58:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:55.477 09:58:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:55.478 09:58:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.478 09:58:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.478 09:58:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:55.478 09:58:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:55.478 09:58:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.478 09:58:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:55.478 "name": "raid_bdev1", 00:13:55.478 "uuid": "f2adce9d-628d-4b70-bef9-3f0053161070", 00:13:55.478 "strip_size_kb": 0, 00:13:55.478 "state": "online", 00:13:55.478 "raid_level": "raid1", 00:13:55.478 "superblock": true, 00:13:55.478 "num_base_bdevs": 2, 00:13:55.478 "num_base_bdevs_discovered": 1, 00:13:55.478 "num_base_bdevs_operational": 1, 00:13:55.478 "base_bdevs_list": [ 00:13:55.478 { 00:13:55.478 "name": null, 00:13:55.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.478 "is_configured": false, 00:13:55.478 "data_offset": 0, 
00:13:55.478 "data_size": 63488 00:13:55.478 }, 00:13:55.478 { 00:13:55.478 "name": "BaseBdev2", 00:13:55.478 "uuid": "124d6632-a2b8-595f-93f8-04444f561749", 00:13:55.478 "is_configured": true, 00:13:55.478 "data_offset": 2048, 00:13:55.478 "data_size": 63488 00:13:55.478 } 00:13:55.478 ] 00:13:55.478 }' 00:13:55.478 09:58:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:55.478 09:58:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:55.478 09:58:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:55.478 09:58:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:55.478 09:58:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:55.478 09:58:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.478 09:58:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:55.478 09:58:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.478 09:58:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:55.478 09:58:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.478 09:58:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:55.478 [2024-10-21 09:58:31.997537] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:55.478 [2024-10-21 09:58:31.997625] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:55.478 [2024-10-21 09:58:31.997652] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:13:55.478 [2024-10-21 09:58:31.997664] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:55.478 [2024-10-21 09:58:31.998195] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:55.478 [2024-10-21 09:58:31.998216] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:55.478 [2024-10-21 09:58:31.998312] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:55.478 [2024-10-21 09:58:31.998331] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:55.478 [2024-10-21 09:58:31.998340] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:55.478 [2024-10-21 09:58:31.998355] bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:55.478 BaseBdev1 00:13:55.478 09:58:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.478 09:58:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:56.417 09:58:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:56.417 09:58:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:56.417 09:58:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:56.417 09:58:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:56.417 09:58:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:56.417 09:58:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:56.417 09:58:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:56.417 09:58:33 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:56.417 09:58:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:56.417 09:58:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:56.678 09:58:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.678 09:58:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:56.678 09:58:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.678 09:58:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.678 09:58:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.678 09:58:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:56.678 "name": "raid_bdev1", 00:13:56.678 "uuid": "f2adce9d-628d-4b70-bef9-3f0053161070", 00:13:56.678 "strip_size_kb": 0, 00:13:56.678 "state": "online", 00:13:56.678 "raid_level": "raid1", 00:13:56.678 "superblock": true, 00:13:56.678 "num_base_bdevs": 2, 00:13:56.678 "num_base_bdevs_discovered": 1, 00:13:56.678 "num_base_bdevs_operational": 1, 00:13:56.678 "base_bdevs_list": [ 00:13:56.678 { 00:13:56.678 "name": null, 00:13:56.678 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.678 "is_configured": false, 00:13:56.678 "data_offset": 0, 00:13:56.678 "data_size": 63488 00:13:56.678 }, 00:13:56.678 { 00:13:56.678 "name": "BaseBdev2", 00:13:56.678 "uuid": "124d6632-a2b8-595f-93f8-04444f561749", 00:13:56.678 "is_configured": true, 00:13:56.678 "data_offset": 2048, 00:13:56.678 "data_size": 63488 00:13:56.678 } 00:13:56.678 ] 00:13:56.678 }' 00:13:56.678 09:58:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:56.678 09:58:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:13:56.939 09:58:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:56.939 09:58:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:56.939 09:58:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:56.939 09:58:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:56.939 09:58:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:56.939 09:58:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.939 09:58:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:56.939 09:58:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.939 09:58:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.939 09:58:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.939 09:58:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:56.939 "name": "raid_bdev1", 00:13:56.939 "uuid": "f2adce9d-628d-4b70-bef9-3f0053161070", 00:13:56.939 "strip_size_kb": 0, 00:13:56.939 "state": "online", 00:13:56.939 "raid_level": "raid1", 00:13:56.939 "superblock": true, 00:13:56.939 "num_base_bdevs": 2, 00:13:56.939 "num_base_bdevs_discovered": 1, 00:13:56.939 "num_base_bdevs_operational": 1, 00:13:56.939 "base_bdevs_list": [ 00:13:56.939 { 00:13:56.939 "name": null, 00:13:56.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.939 "is_configured": false, 00:13:56.939 "data_offset": 0, 00:13:56.939 "data_size": 63488 00:13:56.939 }, 00:13:56.939 { 00:13:56.939 "name": "BaseBdev2", 00:13:56.939 "uuid": "124d6632-a2b8-595f-93f8-04444f561749", 00:13:56.939 "is_configured": true, 
00:13:56.939 "data_offset": 2048, 00:13:56.939 "data_size": 63488 00:13:56.939 } 00:13:56.939 ] 00:13:56.939 }' 00:13:56.939 09:58:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:57.199 09:58:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:57.199 09:58:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:57.199 09:58:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:57.199 09:58:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:57.199 09:58:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:13:57.199 09:58:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:57.199 09:58:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:13:57.199 09:58:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:57.199 09:58:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:57.199 09:58:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:57.199 09:58:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:57.199 09:58:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.199 09:58:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:57.199 [2024-10-21 09:58:33.606010] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:57.199 [2024-10-21 09:58:33.606284] 
bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:57.199 [2024-10-21 09:58:33.606367] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:57.199 request: 00:13:57.199 { 00:13:57.199 "base_bdev": "BaseBdev1", 00:13:57.199 "raid_bdev": "raid_bdev1", 00:13:57.199 "method": "bdev_raid_add_base_bdev", 00:13:57.199 "req_id": 1 00:13:57.199 } 00:13:57.199 Got JSON-RPC error response 00:13:57.199 response: 00:13:57.199 { 00:13:57.199 "code": -22, 00:13:57.199 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:57.199 } 00:13:57.199 09:58:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:57.199 09:58:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:13:57.199 09:58:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:57.199 09:58:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:57.199 09:58:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:57.199 09:58:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:58.138 09:58:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:58.138 09:58:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:58.138 09:58:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:58.138 09:58:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:58.138 09:58:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:58.138 09:58:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:13:58.138 09:58:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:58.138 09:58:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:58.138 09:58:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:58.138 09:58:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:58.138 09:58:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.138 09:58:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.138 09:58:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:58.138 09:58:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:58.138 09:58:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.138 09:58:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:58.138 "name": "raid_bdev1", 00:13:58.138 "uuid": "f2adce9d-628d-4b70-bef9-3f0053161070", 00:13:58.138 "strip_size_kb": 0, 00:13:58.138 "state": "online", 00:13:58.138 "raid_level": "raid1", 00:13:58.138 "superblock": true, 00:13:58.138 "num_base_bdevs": 2, 00:13:58.138 "num_base_bdevs_discovered": 1, 00:13:58.138 "num_base_bdevs_operational": 1, 00:13:58.138 "base_bdevs_list": [ 00:13:58.138 { 00:13:58.138 "name": null, 00:13:58.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.138 "is_configured": false, 00:13:58.138 "data_offset": 0, 00:13:58.138 "data_size": 63488 00:13:58.138 }, 00:13:58.138 { 00:13:58.138 "name": "BaseBdev2", 00:13:58.138 "uuid": "124d6632-a2b8-595f-93f8-04444f561749", 00:13:58.138 "is_configured": true, 00:13:58.138 "data_offset": 2048, 00:13:58.138 "data_size": 63488 00:13:58.138 } 00:13:58.138 ] 00:13:58.138 }' 
00:13:58.138 09:58:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:58.138 09:58:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:58.708 09:58:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:58.708 09:58:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:58.708 09:58:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:58.708 09:58:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:58.708 09:58:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:58.708 09:58:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.708 09:58:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:58.708 09:58:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.708 09:58:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:58.708 09:58:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.708 09:58:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:58.708 "name": "raid_bdev1", 00:13:58.708 "uuid": "f2adce9d-628d-4b70-bef9-3f0053161070", 00:13:58.708 "strip_size_kb": 0, 00:13:58.708 "state": "online", 00:13:58.708 "raid_level": "raid1", 00:13:58.708 "superblock": true, 00:13:58.708 "num_base_bdevs": 2, 00:13:58.708 "num_base_bdevs_discovered": 1, 00:13:58.708 "num_base_bdevs_operational": 1, 00:13:58.708 "base_bdevs_list": [ 00:13:58.708 { 00:13:58.708 "name": null, 00:13:58.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.708 "is_configured": false, 00:13:58.708 "data_offset": 0, 
00:13:58.708 "data_size": 63488 00:13:58.708 }, 00:13:58.708 { 00:13:58.708 "name": "BaseBdev2", 00:13:58.708 "uuid": "124d6632-a2b8-595f-93f8-04444f561749", 00:13:58.708 "is_configured": true, 00:13:58.708 "data_offset": 2048, 00:13:58.708 "data_size": 63488 00:13:58.708 } 00:13:58.708 ] 00:13:58.708 }' 00:13:58.708 09:58:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:58.708 09:58:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:58.708 09:58:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:58.708 09:58:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:58.708 09:58:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 76473 00:13:58.708 09:58:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@950 -- # '[' -z 76473 ']' 00:13:58.708 09:58:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # kill -0 76473 00:13:58.708 09:58:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # uname 00:13:58.708 09:58:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:58.708 09:58:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76473 00:13:58.708 09:58:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:58.708 09:58:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:58.708 killing process with pid 76473 00:13:58.708 09:58:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76473' 00:13:58.708 09:58:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@969 -- # kill 76473 00:13:58.708 Received shutdown signal, test time was 
about 18.359379 seconds 00:13:58.708 00:13:58.708 Latency(us) 00:13:58.708 [2024-10-21T09:58:35.303Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:58.708 [2024-10-21T09:58:35.303Z] =================================================================================================================== 00:13:58.708 [2024-10-21T09:58:35.303Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:58.708 [2024-10-21 09:58:35.259476] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:58.708 09:58:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@974 -- # wait 76473 00:13:58.708 [2024-10-21 09:58:35.259664] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:58.708 [2024-10-21 09:58:35.259734] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:58.708 [2024-10-21 09:58:35.259761] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000005f00 name raid_bdev1, state offline 00:13:58.968 [2024-10-21 09:58:35.514956] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:00.350 ************************************ 00:14:00.350 END TEST raid_rebuild_test_sb_io 00:14:00.350 09:58:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:14:00.350 00:14:00.350 real 0m21.729s 00:14:00.350 user 0m27.973s 00:14:00.350 sys 0m2.494s 00:14:00.350 09:58:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:00.350 09:58:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:00.350 ************************************ 00:14:00.350 09:58:36 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:14:00.350 09:58:36 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:14:00.350 09:58:36 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:14:00.350 
09:58:36 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:00.350 09:58:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:00.350 ************************************ 00:14:00.350 START TEST raid_rebuild_test 00:14:00.350 ************************************ 00:14:00.350 09:58:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 false false true 00:14:00.350 09:58:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:00.350 09:58:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:00.350 09:58:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:00.350 09:58:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:00.350 09:58:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:00.350 09:58:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:00.350 09:58:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:00.350 09:58:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:00.350 09:58:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:00.350 09:58:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:00.350 09:58:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:00.350 09:58:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:00.350 09:58:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:00.350 09:58:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:00.350 09:58:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:00.350 09:58:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:14:00.350 09:58:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:00.350 09:58:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:00.350 09:58:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:00.350 09:58:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:00.350 09:58:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:00.350 09:58:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:00.350 09:58:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:00.350 09:58:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:00.350 09:58:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:00.350 09:58:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:00.350 09:58:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:00.350 09:58:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:00.350 09:58:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:14:00.350 09:58:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=77182 00:14:00.350 09:58:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:00.350 09:58:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 77182 00:14:00.350 09:58:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 77182 ']' 00:14:00.350 09:58:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:00.350 Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:00.350 09:58:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:00.350 09:58:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:00.350 09:58:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:00.350 09:58:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.610 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:00.610 Zero copy mechanism will not be used. 00:14:00.610 [2024-10-21 09:58:36.964567] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:14:00.610 [2024-10-21 09:58:36.964710] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77182 ] 00:14:00.610 [2024-10-21 09:58:37.135251] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:00.870 [2024-10-21 09:58:37.277301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:01.129 [2024-10-21 09:58:37.519594] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:01.129 [2024-10-21 09:58:37.519779] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:01.389 09:58:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:01.389 09:58:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:14:01.389 09:58:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:01.389 09:58:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1_malloc 00:14:01.389 09:58:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.389 09:58:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.389 BaseBdev1_malloc 00:14:01.389 09:58:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.389 09:58:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:01.389 09:58:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.389 09:58:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.389 [2024-10-21 09:58:37.852970] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:01.389 [2024-10-21 09:58:37.853057] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:01.389 [2024-10-21 09:58:37.853083] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:14:01.389 [2024-10-21 09:58:37.853094] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:01.389 [2024-10-21 09:58:37.855488] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:01.389 [2024-10-21 09:58:37.855528] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:01.389 BaseBdev1 00:14:01.389 09:58:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.389 09:58:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:01.389 09:58:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:01.389 09:58:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.389 09:58:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 
00:14:01.389 BaseBdev2_malloc 00:14:01.389 09:58:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.389 09:58:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:01.389 09:58:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.389 09:58:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.389 [2024-10-21 09:58:37.916700] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:01.389 [2024-10-21 09:58:37.916833] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:01.389 [2024-10-21 09:58:37.916868] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:14:01.389 [2024-10-21 09:58:37.916897] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:01.389 [2024-10-21 09:58:37.919229] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:01.389 [2024-10-21 09:58:37.919323] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:01.389 BaseBdev2 00:14:01.389 09:58:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.389 09:58:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:01.389 09:58:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:01.389 09:58:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.389 09:58:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.649 BaseBdev3_malloc 00:14:01.649 09:58:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.649 09:58:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd 
bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:01.649 09:58:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.649 09:58:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.649 [2024-10-21 09:58:38.012621] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:01.649 [2024-10-21 09:58:38.012753] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:01.649 [2024-10-21 09:58:38.012795] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:14:01.649 [2024-10-21 09:58:38.012828] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:01.649 [2024-10-21 09:58:38.015173] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:01.649 [2024-10-21 09:58:38.015254] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:01.649 BaseBdev3 00:14:01.649 09:58:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.649 09:58:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:01.649 09:58:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:01.649 09:58:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.649 09:58:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.649 BaseBdev4_malloc 00:14:01.649 09:58:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.649 09:58:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:01.649 09:58:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.649 09:58:38 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:01.649 [2024-10-21 09:58:38.079183] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:01.650 [2024-10-21 09:58:38.079315] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:01.650 [2024-10-21 09:58:38.079352] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:14:01.650 [2024-10-21 09:58:38.079425] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:01.650 [2024-10-21 09:58:38.082079] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:01.650 [2024-10-21 09:58:38.082172] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:01.650 BaseBdev4 00:14:01.650 09:58:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.650 09:58:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:01.650 09:58:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.650 09:58:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.650 spare_malloc 00:14:01.650 09:58:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.650 09:58:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:01.650 09:58:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.650 09:58:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.650 spare_delay 00:14:01.650 09:58:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.650 09:58:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:01.650 
09:58:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.650 09:58:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.650 [2024-10-21 09:58:38.156924] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:01.650 [2024-10-21 09:58:38.157070] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:01.650 [2024-10-21 09:58:38.157097] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:14:01.650 [2024-10-21 09:58:38.157110] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:01.650 [2024-10-21 09:58:38.159478] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:01.650 [2024-10-21 09:58:38.159518] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:01.650 spare 00:14:01.650 09:58:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.650 09:58:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:01.650 09:58:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.650 09:58:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.650 [2024-10-21 09:58:38.168961] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:01.650 [2024-10-21 09:58:38.171179] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:01.650 [2024-10-21 09:58:38.171255] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:01.650 [2024-10-21 09:58:38.171306] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:01.650 [2024-10-21 09:58:38.171388] bdev_raid.c:1734:raid_bdev_configure_cont: 
*DEBUG*: io device register 0x617000005b80 00:14:01.650 [2024-10-21 09:58:38.171399] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:01.650 [2024-10-21 09:58:38.171682] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:14:01.650 [2024-10-21 09:58:38.171876] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000005b80 00:14:01.650 [2024-10-21 09:58:38.171887] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000005b80 00:14:01.650 [2024-10-21 09:58:38.172052] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:01.650 09:58:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.650 09:58:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:01.650 09:58:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:01.650 09:58:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:01.650 09:58:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:01.650 09:58:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:01.650 09:58:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:01.650 09:58:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:01.650 09:58:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:01.650 09:58:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:01.650 09:58:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:01.650 09:58:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.650 09:58:38 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:01.650 09:58:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.650 09:58:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.650 09:58:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.650 09:58:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:01.650 "name": "raid_bdev1", 00:14:01.650 "uuid": "7291160c-639a-41b8-9c29-e4debe738d1c", 00:14:01.650 "strip_size_kb": 0, 00:14:01.650 "state": "online", 00:14:01.650 "raid_level": "raid1", 00:14:01.650 "superblock": false, 00:14:01.650 "num_base_bdevs": 4, 00:14:01.650 "num_base_bdevs_discovered": 4, 00:14:01.650 "num_base_bdevs_operational": 4, 00:14:01.650 "base_bdevs_list": [ 00:14:01.650 { 00:14:01.650 "name": "BaseBdev1", 00:14:01.650 "uuid": "a9f1f09b-6f17-5ac7-b14e-add5207f6ff3", 00:14:01.650 "is_configured": true, 00:14:01.650 "data_offset": 0, 00:14:01.650 "data_size": 65536 00:14:01.650 }, 00:14:01.650 { 00:14:01.650 "name": "BaseBdev2", 00:14:01.650 "uuid": "100f7994-f4d2-5f33-abff-367928ce0412", 00:14:01.650 "is_configured": true, 00:14:01.650 "data_offset": 0, 00:14:01.650 "data_size": 65536 00:14:01.650 }, 00:14:01.650 { 00:14:01.650 "name": "BaseBdev3", 00:14:01.650 "uuid": "791c7fa6-95ab-57da-932c-0606cd4093a1", 00:14:01.650 "is_configured": true, 00:14:01.650 "data_offset": 0, 00:14:01.650 "data_size": 65536 00:14:01.650 }, 00:14:01.650 { 00:14:01.650 "name": "BaseBdev4", 00:14:01.650 "uuid": "08218d6a-8a6b-565d-962e-cd4be62e539b", 00:14:01.650 "is_configured": true, 00:14:01.650 "data_offset": 0, 00:14:01.650 "data_size": 65536 00:14:01.650 } 00:14:01.650 ] 00:14:01.650 }' 00:14:01.650 09:58:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:01.650 09:58:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:14:02.219 09:58:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:02.219 09:58:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:02.219 09:58:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.219 09:58:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.219 [2024-10-21 09:58:38.688473] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:02.219 09:58:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.219 09:58:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:14:02.220 09:58:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.220 09:58:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.220 09:58:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.220 09:58:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:02.220 09:58:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.220 09:58:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:02.220 09:58:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:02.220 09:58:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:02.220 09:58:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:02.220 09:58:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:02.220 09:58:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:02.220 09:58:38 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:02.220 09:58:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:02.220 09:58:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:02.220 09:58:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:02.220 09:58:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:02.220 09:58:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:02.220 09:58:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:02.220 09:58:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:02.479 [2024-10-21 09:58:38.979700] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:02.479 /dev/nbd0 00:14:02.479 09:58:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:02.479 09:58:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:02.479 09:58:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:02.479 09:58:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:14:02.479 09:58:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:02.479 09:58:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:02.479 09:58:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:02.479 09:58:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:14:02.479 09:58:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:02.479 09:58:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:02.479 09:58:39 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:02.479 1+0 records in 00:14:02.479 1+0 records out 00:14:02.479 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000519492 s, 7.9 MB/s 00:14:02.479 09:58:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:02.479 09:58:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:14:02.479 09:58:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:02.479 09:58:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:02.479 09:58:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:14:02.479 09:58:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:02.479 09:58:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:02.479 09:58:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:14:02.479 09:58:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:14:02.479 09:58:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:14:09.059 65536+0 records in 00:14:09.059 65536+0 records out 00:14:09.059 33554432 bytes (34 MB, 32 MiB) copied, 5.70587 s, 5.9 MB/s 00:14:09.059 09:58:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:09.059 09:58:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:09.059 09:58:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:09.059 09:58:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:09.059 
09:58:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:09.059 09:58:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:09.059 09:58:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:09.059 09:58:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:09.059 [2024-10-21 09:58:44.989897] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:09.059 09:58:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:09.059 09:58:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:09.059 09:58:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:09.059 09:58:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:09.059 09:58:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:09.059 09:58:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:09.059 09:58:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:09.059 09:58:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:09.059 09:58:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.059 09:58:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.059 [2024-10-21 09:58:45.010001] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:09.059 09:58:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.059 09:58:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:09.059 09:58:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:14:09.059 09:58:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:09.059 09:58:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:09.059 09:58:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:09.059 09:58:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:09.059 09:58:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:09.059 09:58:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:09.059 09:58:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:09.059 09:58:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:09.060 09:58:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.060 09:58:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:09.060 09:58:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.060 09:58:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.060 09:58:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.060 09:58:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:09.060 "name": "raid_bdev1", 00:14:09.060 "uuid": "7291160c-639a-41b8-9c29-e4debe738d1c", 00:14:09.060 "strip_size_kb": 0, 00:14:09.060 "state": "online", 00:14:09.060 "raid_level": "raid1", 00:14:09.060 "superblock": false, 00:14:09.060 "num_base_bdevs": 4, 00:14:09.060 "num_base_bdevs_discovered": 3, 00:14:09.060 "num_base_bdevs_operational": 3, 00:14:09.060 "base_bdevs_list": [ 00:14:09.060 { 00:14:09.060 "name": null, 00:14:09.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.060 
"is_configured": false, 00:14:09.060 "data_offset": 0, 00:14:09.060 "data_size": 65536 00:14:09.060 }, 00:14:09.060 { 00:14:09.060 "name": "BaseBdev2", 00:14:09.060 "uuid": "100f7994-f4d2-5f33-abff-367928ce0412", 00:14:09.060 "is_configured": true, 00:14:09.060 "data_offset": 0, 00:14:09.060 "data_size": 65536 00:14:09.060 }, 00:14:09.060 { 00:14:09.060 "name": "BaseBdev3", 00:14:09.060 "uuid": "791c7fa6-95ab-57da-932c-0606cd4093a1", 00:14:09.060 "is_configured": true, 00:14:09.060 "data_offset": 0, 00:14:09.060 "data_size": 65536 00:14:09.060 }, 00:14:09.060 { 00:14:09.060 "name": "BaseBdev4", 00:14:09.060 "uuid": "08218d6a-8a6b-565d-962e-cd4be62e539b", 00:14:09.060 "is_configured": true, 00:14:09.060 "data_offset": 0, 00:14:09.060 "data_size": 65536 00:14:09.060 } 00:14:09.060 ] 00:14:09.060 }' 00:14:09.060 09:58:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:09.060 09:58:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.060 09:58:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:09.060 09:58:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.060 09:58:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.060 [2024-10-21 09:58:45.485225] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:09.060 [2024-10-21 09:58:45.503599] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09b00 00:14:09.060 09:58:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.060 09:58:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:09.060 [2024-10-21 09:58:45.505834] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:09.998 09:58:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:09.998 09:58:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:09.998 09:58:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:09.999 09:58:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:09.999 09:58:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:09.999 09:58:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.999 09:58:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:09.999 09:58:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.999 09:58:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.999 09:58:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.999 09:58:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:09.999 "name": "raid_bdev1", 00:14:09.999 "uuid": "7291160c-639a-41b8-9c29-e4debe738d1c", 00:14:09.999 "strip_size_kb": 0, 00:14:09.999 "state": "online", 00:14:09.999 "raid_level": "raid1", 00:14:09.999 "superblock": false, 00:14:09.999 "num_base_bdevs": 4, 00:14:09.999 "num_base_bdevs_discovered": 4, 00:14:09.999 "num_base_bdevs_operational": 4, 00:14:09.999 "process": { 00:14:09.999 "type": "rebuild", 00:14:09.999 "target": "spare", 00:14:09.999 "progress": { 00:14:09.999 "blocks": 20480, 00:14:09.999 "percent": 31 00:14:09.999 } 00:14:09.999 }, 00:14:09.999 "base_bdevs_list": [ 00:14:09.999 { 00:14:09.999 "name": "spare", 00:14:09.999 "uuid": "a3cf0274-cac0-53a5-bbf0-7eb292691833", 00:14:09.999 "is_configured": true, 00:14:09.999 "data_offset": 0, 00:14:09.999 "data_size": 65536 00:14:09.999 }, 00:14:09.999 { 00:14:09.999 "name": "BaseBdev2", 00:14:09.999 "uuid": 
"100f7994-f4d2-5f33-abff-367928ce0412", 00:14:09.999 "is_configured": true, 00:14:09.999 "data_offset": 0, 00:14:09.999 "data_size": 65536 00:14:09.999 }, 00:14:09.999 { 00:14:09.999 "name": "BaseBdev3", 00:14:09.999 "uuid": "791c7fa6-95ab-57da-932c-0606cd4093a1", 00:14:09.999 "is_configured": true, 00:14:09.999 "data_offset": 0, 00:14:09.999 "data_size": 65536 00:14:09.999 }, 00:14:09.999 { 00:14:09.999 "name": "BaseBdev4", 00:14:09.999 "uuid": "08218d6a-8a6b-565d-962e-cd4be62e539b", 00:14:09.999 "is_configured": true, 00:14:09.999 "data_offset": 0, 00:14:09.999 "data_size": 65536 00:14:09.999 } 00:14:09.999 ] 00:14:09.999 }' 00:14:09.999 09:58:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:10.258 09:58:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:10.258 09:58:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:10.258 09:58:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:10.258 09:58:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:10.258 09:58:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.258 09:58:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.258 [2024-10-21 09:58:46.674026] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:10.258 [2024-10-21 09:58:46.715782] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:10.258 [2024-10-21 09:58:46.715859] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:10.258 [2024-10-21 09:58:46.715877] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:10.258 [2024-10-21 09:58:46.715887] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove 
target bdev: No such device 00:14:10.258 09:58:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.258 09:58:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:10.258 09:58:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:10.258 09:58:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:10.258 09:58:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:10.258 09:58:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:10.258 09:58:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:10.258 09:58:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:10.258 09:58:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:10.258 09:58:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:10.258 09:58:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:10.258 09:58:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.258 09:58:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:10.258 09:58:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.258 09:58:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.258 09:58:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.258 09:58:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:10.258 "name": "raid_bdev1", 00:14:10.258 "uuid": "7291160c-639a-41b8-9c29-e4debe738d1c", 00:14:10.258 "strip_size_kb": 0, 00:14:10.258 "state": "online", 
00:14:10.258 "raid_level": "raid1", 00:14:10.258 "superblock": false, 00:14:10.258 "num_base_bdevs": 4, 00:14:10.258 "num_base_bdevs_discovered": 3, 00:14:10.258 "num_base_bdevs_operational": 3, 00:14:10.258 "base_bdevs_list": [ 00:14:10.258 { 00:14:10.258 "name": null, 00:14:10.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.258 "is_configured": false, 00:14:10.258 "data_offset": 0, 00:14:10.258 "data_size": 65536 00:14:10.258 }, 00:14:10.258 { 00:14:10.258 "name": "BaseBdev2", 00:14:10.258 "uuid": "100f7994-f4d2-5f33-abff-367928ce0412", 00:14:10.258 "is_configured": true, 00:14:10.258 "data_offset": 0, 00:14:10.258 "data_size": 65536 00:14:10.258 }, 00:14:10.258 { 00:14:10.258 "name": "BaseBdev3", 00:14:10.258 "uuid": "791c7fa6-95ab-57da-932c-0606cd4093a1", 00:14:10.258 "is_configured": true, 00:14:10.258 "data_offset": 0, 00:14:10.258 "data_size": 65536 00:14:10.258 }, 00:14:10.258 { 00:14:10.258 "name": "BaseBdev4", 00:14:10.258 "uuid": "08218d6a-8a6b-565d-962e-cd4be62e539b", 00:14:10.258 "is_configured": true, 00:14:10.258 "data_offset": 0, 00:14:10.258 "data_size": 65536 00:14:10.258 } 00:14:10.258 ] 00:14:10.258 }' 00:14:10.258 09:58:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:10.258 09:58:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.826 09:58:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:10.826 09:58:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:10.826 09:58:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:10.826 09:58:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:10.826 09:58:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:10.826 09:58:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:14:10.826 09:58:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.826 09:58:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.826 09:58:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.826 09:58:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.826 09:58:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:10.826 "name": "raid_bdev1", 00:14:10.826 "uuid": "7291160c-639a-41b8-9c29-e4debe738d1c", 00:14:10.826 "strip_size_kb": 0, 00:14:10.826 "state": "online", 00:14:10.826 "raid_level": "raid1", 00:14:10.826 "superblock": false, 00:14:10.826 "num_base_bdevs": 4, 00:14:10.826 "num_base_bdevs_discovered": 3, 00:14:10.826 "num_base_bdevs_operational": 3, 00:14:10.826 "base_bdevs_list": [ 00:14:10.826 { 00:14:10.826 "name": null, 00:14:10.826 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.826 "is_configured": false, 00:14:10.826 "data_offset": 0, 00:14:10.826 "data_size": 65536 00:14:10.826 }, 00:14:10.826 { 00:14:10.826 "name": "BaseBdev2", 00:14:10.826 "uuid": "100f7994-f4d2-5f33-abff-367928ce0412", 00:14:10.826 "is_configured": true, 00:14:10.826 "data_offset": 0, 00:14:10.826 "data_size": 65536 00:14:10.826 }, 00:14:10.826 { 00:14:10.826 "name": "BaseBdev3", 00:14:10.826 "uuid": "791c7fa6-95ab-57da-932c-0606cd4093a1", 00:14:10.826 "is_configured": true, 00:14:10.826 "data_offset": 0, 00:14:10.826 "data_size": 65536 00:14:10.826 }, 00:14:10.826 { 00:14:10.826 "name": "BaseBdev4", 00:14:10.827 "uuid": "08218d6a-8a6b-565d-962e-cd4be62e539b", 00:14:10.827 "is_configured": true, 00:14:10.827 "data_offset": 0, 00:14:10.827 "data_size": 65536 00:14:10.827 } 00:14:10.827 ] 00:14:10.827 }' 00:14:10.827 09:58:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:10.827 09:58:47 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:10.827 09:58:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:10.827 09:58:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:10.827 09:58:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:10.827 09:58:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.827 09:58:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.827 [2024-10-21 09:58:47.335392] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:10.827 [2024-10-21 09:58:47.351944] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 00:14:10.827 09:58:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.827 09:58:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:10.827 [2024-10-21 09:58:47.354124] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:11.767 09:58:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:11.767 09:58:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:11.767 09:58:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:11.767 09:58:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:11.767 09:58:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:12.028 09:58:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:12.028 09:58:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.028 09:58:48 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.028 09:58:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.028 09:58:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.028 09:58:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:12.028 "name": "raid_bdev1", 00:14:12.028 "uuid": "7291160c-639a-41b8-9c29-e4debe738d1c", 00:14:12.028 "strip_size_kb": 0, 00:14:12.028 "state": "online", 00:14:12.028 "raid_level": "raid1", 00:14:12.028 "superblock": false, 00:14:12.028 "num_base_bdevs": 4, 00:14:12.028 "num_base_bdevs_discovered": 4, 00:14:12.028 "num_base_bdevs_operational": 4, 00:14:12.028 "process": { 00:14:12.028 "type": "rebuild", 00:14:12.028 "target": "spare", 00:14:12.028 "progress": { 00:14:12.028 "blocks": 20480, 00:14:12.028 "percent": 31 00:14:12.028 } 00:14:12.028 }, 00:14:12.028 "base_bdevs_list": [ 00:14:12.028 { 00:14:12.028 "name": "spare", 00:14:12.028 "uuid": "a3cf0274-cac0-53a5-bbf0-7eb292691833", 00:14:12.028 "is_configured": true, 00:14:12.028 "data_offset": 0, 00:14:12.028 "data_size": 65536 00:14:12.028 }, 00:14:12.028 { 00:14:12.028 "name": "BaseBdev2", 00:14:12.028 "uuid": "100f7994-f4d2-5f33-abff-367928ce0412", 00:14:12.028 "is_configured": true, 00:14:12.028 "data_offset": 0, 00:14:12.028 "data_size": 65536 00:14:12.028 }, 00:14:12.028 { 00:14:12.028 "name": "BaseBdev3", 00:14:12.028 "uuid": "791c7fa6-95ab-57da-932c-0606cd4093a1", 00:14:12.028 "is_configured": true, 00:14:12.028 "data_offset": 0, 00:14:12.028 "data_size": 65536 00:14:12.028 }, 00:14:12.028 { 00:14:12.028 "name": "BaseBdev4", 00:14:12.028 "uuid": "08218d6a-8a6b-565d-962e-cd4be62e539b", 00:14:12.028 "is_configured": true, 00:14:12.028 "data_offset": 0, 00:14:12.028 "data_size": 65536 00:14:12.028 } 00:14:12.028 ] 00:14:12.028 }' 00:14:12.028 09:58:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // 
"none"' 00:14:12.028 09:58:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:12.028 09:58:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:12.028 09:58:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:12.028 09:58:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:12.028 09:58:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:12.028 09:58:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:12.028 09:58:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:12.028 09:58:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:12.028 09:58:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.028 09:58:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.028 [2024-10-21 09:58:48.514027] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:12.028 [2024-10-21 09:58:48.563705] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09bd0 00:14:12.028 09:58:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.028 09:58:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:12.028 09:58:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:12.028 09:58:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:12.028 09:58:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:12.028 09:58:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:12.028 
09:58:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:12.028 09:58:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:12.028 09:58:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.028 09:58:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:12.028 09:58:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.028 09:58:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.028 09:58:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.288 09:58:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:12.288 "name": "raid_bdev1", 00:14:12.288 "uuid": "7291160c-639a-41b8-9c29-e4debe738d1c", 00:14:12.288 "strip_size_kb": 0, 00:14:12.288 "state": "online", 00:14:12.288 "raid_level": "raid1", 00:14:12.288 "superblock": false, 00:14:12.288 "num_base_bdevs": 4, 00:14:12.288 "num_base_bdevs_discovered": 3, 00:14:12.288 "num_base_bdevs_operational": 3, 00:14:12.288 "process": { 00:14:12.288 "type": "rebuild", 00:14:12.288 "target": "spare", 00:14:12.288 "progress": { 00:14:12.288 "blocks": 24576, 00:14:12.288 "percent": 37 00:14:12.288 } 00:14:12.288 }, 00:14:12.288 "base_bdevs_list": [ 00:14:12.288 { 00:14:12.288 "name": "spare", 00:14:12.288 "uuid": "a3cf0274-cac0-53a5-bbf0-7eb292691833", 00:14:12.288 "is_configured": true, 00:14:12.288 "data_offset": 0, 00:14:12.288 "data_size": 65536 00:14:12.288 }, 00:14:12.288 { 00:14:12.288 "name": null, 00:14:12.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.288 "is_configured": false, 00:14:12.288 "data_offset": 0, 00:14:12.288 "data_size": 65536 00:14:12.288 }, 00:14:12.288 { 00:14:12.288 "name": "BaseBdev3", 00:14:12.288 "uuid": "791c7fa6-95ab-57da-932c-0606cd4093a1", 00:14:12.288 "is_configured": true, 
00:14:12.288 "data_offset": 0, 00:14:12.288 "data_size": 65536 00:14:12.288 }, 00:14:12.288 { 00:14:12.288 "name": "BaseBdev4", 00:14:12.288 "uuid": "08218d6a-8a6b-565d-962e-cd4be62e539b", 00:14:12.288 "is_configured": true, 00:14:12.288 "data_offset": 0, 00:14:12.288 "data_size": 65536 00:14:12.288 } 00:14:12.288 ] 00:14:12.288 }' 00:14:12.288 09:58:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:12.288 09:58:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:12.288 09:58:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:12.288 09:58:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:12.288 09:58:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=455 00:14:12.288 09:58:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:12.288 09:58:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:12.288 09:58:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:12.289 09:58:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:12.289 09:58:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:12.289 09:58:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:12.289 09:58:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:12.289 09:58:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.289 09:58:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.289 09:58:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.289 09:58:48 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.289 09:58:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:12.289 "name": "raid_bdev1", 00:14:12.289 "uuid": "7291160c-639a-41b8-9c29-e4debe738d1c", 00:14:12.289 "strip_size_kb": 0, 00:14:12.289 "state": "online", 00:14:12.289 "raid_level": "raid1", 00:14:12.289 "superblock": false, 00:14:12.289 "num_base_bdevs": 4, 00:14:12.289 "num_base_bdevs_discovered": 3, 00:14:12.289 "num_base_bdevs_operational": 3, 00:14:12.289 "process": { 00:14:12.289 "type": "rebuild", 00:14:12.289 "target": "spare", 00:14:12.289 "progress": { 00:14:12.289 "blocks": 26624, 00:14:12.289 "percent": 40 00:14:12.289 } 00:14:12.289 }, 00:14:12.289 "base_bdevs_list": [ 00:14:12.289 { 00:14:12.289 "name": "spare", 00:14:12.289 "uuid": "a3cf0274-cac0-53a5-bbf0-7eb292691833", 00:14:12.289 "is_configured": true, 00:14:12.289 "data_offset": 0, 00:14:12.289 "data_size": 65536 00:14:12.289 }, 00:14:12.289 { 00:14:12.289 "name": null, 00:14:12.289 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.289 "is_configured": false, 00:14:12.289 "data_offset": 0, 00:14:12.289 "data_size": 65536 00:14:12.289 }, 00:14:12.289 { 00:14:12.289 "name": "BaseBdev3", 00:14:12.289 "uuid": "791c7fa6-95ab-57da-932c-0606cd4093a1", 00:14:12.289 "is_configured": true, 00:14:12.289 "data_offset": 0, 00:14:12.289 "data_size": 65536 00:14:12.289 }, 00:14:12.289 { 00:14:12.289 "name": "BaseBdev4", 00:14:12.289 "uuid": "08218d6a-8a6b-565d-962e-cd4be62e539b", 00:14:12.289 "is_configured": true, 00:14:12.289 "data_offset": 0, 00:14:12.289 "data_size": 65536 00:14:12.289 } 00:14:12.289 ] 00:14:12.289 }' 00:14:12.289 09:58:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:12.289 09:58:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:12.289 09:58:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq 
-r '.process.target // "none"' 00:14:12.289 09:58:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:12.289 09:58:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:13.668 09:58:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:13.668 09:58:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:13.668 09:58:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:13.668 09:58:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:13.668 09:58:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:13.668 09:58:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:13.668 09:58:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.668 09:58:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:13.668 09:58:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.668 09:58:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.668 09:58:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.668 09:58:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:13.668 "name": "raid_bdev1", 00:14:13.668 "uuid": "7291160c-639a-41b8-9c29-e4debe738d1c", 00:14:13.668 "strip_size_kb": 0, 00:14:13.668 "state": "online", 00:14:13.668 "raid_level": "raid1", 00:14:13.668 "superblock": false, 00:14:13.668 "num_base_bdevs": 4, 00:14:13.668 "num_base_bdevs_discovered": 3, 00:14:13.668 "num_base_bdevs_operational": 3, 00:14:13.668 "process": { 00:14:13.668 "type": "rebuild", 00:14:13.668 "target": "spare", 00:14:13.668 "progress": { 00:14:13.668 
"blocks": 49152, 00:14:13.668 "percent": 75 00:14:13.668 } 00:14:13.668 }, 00:14:13.668 "base_bdevs_list": [ 00:14:13.668 { 00:14:13.668 "name": "spare", 00:14:13.668 "uuid": "a3cf0274-cac0-53a5-bbf0-7eb292691833", 00:14:13.668 "is_configured": true, 00:14:13.668 "data_offset": 0, 00:14:13.668 "data_size": 65536 00:14:13.668 }, 00:14:13.668 { 00:14:13.668 "name": null, 00:14:13.668 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.668 "is_configured": false, 00:14:13.668 "data_offset": 0, 00:14:13.668 "data_size": 65536 00:14:13.668 }, 00:14:13.668 { 00:14:13.668 "name": "BaseBdev3", 00:14:13.668 "uuid": "791c7fa6-95ab-57da-932c-0606cd4093a1", 00:14:13.668 "is_configured": true, 00:14:13.668 "data_offset": 0, 00:14:13.668 "data_size": 65536 00:14:13.668 }, 00:14:13.668 { 00:14:13.668 "name": "BaseBdev4", 00:14:13.668 "uuid": "08218d6a-8a6b-565d-962e-cd4be62e539b", 00:14:13.668 "is_configured": true, 00:14:13.668 "data_offset": 0, 00:14:13.668 "data_size": 65536 00:14:13.668 } 00:14:13.668 ] 00:14:13.668 }' 00:14:13.668 09:58:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:13.668 09:58:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:13.668 09:58:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:13.669 09:58:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:13.669 09:58:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:14.239 [2024-10-21 09:58:50.579697] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:14.239 [2024-10-21 09:58:50.579779] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:14.239 [2024-10-21 09:58:50.579829] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:14.499 09:58:50 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:14.499 09:58:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:14.499 09:58:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:14.499 09:58:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:14.499 09:58:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:14.499 09:58:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:14.499 09:58:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.499 09:58:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.499 09:58:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.499 09:58:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:14.499 09:58:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.499 09:58:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:14.499 "name": "raid_bdev1", 00:14:14.499 "uuid": "7291160c-639a-41b8-9c29-e4debe738d1c", 00:14:14.499 "strip_size_kb": 0, 00:14:14.499 "state": "online", 00:14:14.499 "raid_level": "raid1", 00:14:14.499 "superblock": false, 00:14:14.499 "num_base_bdevs": 4, 00:14:14.499 "num_base_bdevs_discovered": 3, 00:14:14.499 "num_base_bdevs_operational": 3, 00:14:14.499 "base_bdevs_list": [ 00:14:14.499 { 00:14:14.499 "name": "spare", 00:14:14.499 "uuid": "a3cf0274-cac0-53a5-bbf0-7eb292691833", 00:14:14.499 "is_configured": true, 00:14:14.499 "data_offset": 0, 00:14:14.499 "data_size": 65536 00:14:14.499 }, 00:14:14.499 { 00:14:14.499 "name": null, 00:14:14.499 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.499 "is_configured": false, 00:14:14.499 
"data_offset": 0, 00:14:14.499 "data_size": 65536 00:14:14.499 }, 00:14:14.499 { 00:14:14.499 "name": "BaseBdev3", 00:14:14.499 "uuid": "791c7fa6-95ab-57da-932c-0606cd4093a1", 00:14:14.499 "is_configured": true, 00:14:14.499 "data_offset": 0, 00:14:14.499 "data_size": 65536 00:14:14.499 }, 00:14:14.499 { 00:14:14.499 "name": "BaseBdev4", 00:14:14.499 "uuid": "08218d6a-8a6b-565d-962e-cd4be62e539b", 00:14:14.499 "is_configured": true, 00:14:14.499 "data_offset": 0, 00:14:14.499 "data_size": 65536 00:14:14.499 } 00:14:14.499 ] 00:14:14.499 }' 00:14:14.499 09:58:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:14.759 09:58:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:14.759 09:58:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:14.759 09:58:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:14.759 09:58:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:14:14.759 09:58:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:14.759 09:58:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:14.759 09:58:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:14.759 09:58:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:14.759 09:58:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:14.759 09:58:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:14.759 09:58:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.759 09:58:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.759 09:58:51 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.759 09:58:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.759 09:58:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:14.759 "name": "raid_bdev1", 00:14:14.759 "uuid": "7291160c-639a-41b8-9c29-e4debe738d1c", 00:14:14.759 "strip_size_kb": 0, 00:14:14.759 "state": "online", 00:14:14.759 "raid_level": "raid1", 00:14:14.759 "superblock": false, 00:14:14.759 "num_base_bdevs": 4, 00:14:14.759 "num_base_bdevs_discovered": 3, 00:14:14.759 "num_base_bdevs_operational": 3, 00:14:14.759 "base_bdevs_list": [ 00:14:14.759 { 00:14:14.759 "name": "spare", 00:14:14.759 "uuid": "a3cf0274-cac0-53a5-bbf0-7eb292691833", 00:14:14.759 "is_configured": true, 00:14:14.759 "data_offset": 0, 00:14:14.759 "data_size": 65536 00:14:14.759 }, 00:14:14.759 { 00:14:14.759 "name": null, 00:14:14.759 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.759 "is_configured": false, 00:14:14.759 "data_offset": 0, 00:14:14.760 "data_size": 65536 00:14:14.760 }, 00:14:14.760 { 00:14:14.760 "name": "BaseBdev3", 00:14:14.760 "uuid": "791c7fa6-95ab-57da-932c-0606cd4093a1", 00:14:14.760 "is_configured": true, 00:14:14.760 "data_offset": 0, 00:14:14.760 "data_size": 65536 00:14:14.760 }, 00:14:14.760 { 00:14:14.760 "name": "BaseBdev4", 00:14:14.760 "uuid": "08218d6a-8a6b-565d-962e-cd4be62e539b", 00:14:14.760 "is_configured": true, 00:14:14.760 "data_offset": 0, 00:14:14.760 "data_size": 65536 00:14:14.760 } 00:14:14.760 ] 00:14:14.760 }' 00:14:14.760 09:58:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:14.760 09:58:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:14.760 09:58:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:14.760 09:58:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none 
== \n\o\n\e ]] 00:14:14.760 09:58:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:14.760 09:58:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:14.760 09:58:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:14.760 09:58:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:14.760 09:58:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:14.760 09:58:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:14.760 09:58:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:14.760 09:58:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:14.760 09:58:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:14.760 09:58:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:14.760 09:58:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.760 09:58:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:14.760 09:58:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.760 09:58:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.760 09:58:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.760 09:58:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:14.760 "name": "raid_bdev1", 00:14:14.760 "uuid": "7291160c-639a-41b8-9c29-e4debe738d1c", 00:14:14.760 "strip_size_kb": 0, 00:14:14.760 "state": "online", 00:14:14.760 "raid_level": "raid1", 00:14:14.760 "superblock": false, 00:14:14.760 "num_base_bdevs": 4, 00:14:14.760 
"num_base_bdevs_discovered": 3, 00:14:14.760 "num_base_bdevs_operational": 3, 00:14:14.760 "base_bdevs_list": [ 00:14:14.760 { 00:14:14.760 "name": "spare", 00:14:14.760 "uuid": "a3cf0274-cac0-53a5-bbf0-7eb292691833", 00:14:14.760 "is_configured": true, 00:14:14.760 "data_offset": 0, 00:14:14.760 "data_size": 65536 00:14:14.760 }, 00:14:14.760 { 00:14:14.760 "name": null, 00:14:14.760 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.760 "is_configured": false, 00:14:14.760 "data_offset": 0, 00:14:14.760 "data_size": 65536 00:14:14.760 }, 00:14:14.760 { 00:14:14.760 "name": "BaseBdev3", 00:14:14.760 "uuid": "791c7fa6-95ab-57da-932c-0606cd4093a1", 00:14:14.760 "is_configured": true, 00:14:14.760 "data_offset": 0, 00:14:14.760 "data_size": 65536 00:14:14.760 }, 00:14:14.760 { 00:14:14.760 "name": "BaseBdev4", 00:14:14.760 "uuid": "08218d6a-8a6b-565d-962e-cd4be62e539b", 00:14:14.760 "is_configured": true, 00:14:14.760 "data_offset": 0, 00:14:14.760 "data_size": 65536 00:14:14.760 } 00:14:14.760 ] 00:14:14.760 }' 00:14:14.760 09:58:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:14.760 09:58:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.330 09:58:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:15.330 09:58:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.330 09:58:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.330 [2024-10-21 09:58:51.738469] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:15.330 [2024-10-21 09:58:51.738515] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:15.330 [2024-10-21 09:58:51.738670] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:15.330 [2024-10-21 09:58:51.738800] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: 
raid bdev base bdevs is 0, going to free all in destruct 00:14:15.330 [2024-10-21 09:58:51.738812] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000005b80 name raid_bdev1, state offline 00:14:15.330 09:58:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.330 09:58:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.330 09:58:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:14:15.330 09:58:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.330 09:58:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.330 09:58:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.331 09:58:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:15.331 09:58:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:15.331 09:58:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:15.331 09:58:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:15.331 09:58:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:15.331 09:58:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:15.331 09:58:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:15.331 09:58:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:15.331 09:58:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:15.331 09:58:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:15.331 09:58:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:15.331 09:58:51 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:15.331 09:58:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:15.590 /dev/nbd0 00:14:15.590 09:58:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:15.590 09:58:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:15.590 09:58:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:15.590 09:58:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:14:15.590 09:58:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:15.590 09:58:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:15.590 09:58:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:15.590 09:58:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:14:15.590 09:58:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:15.590 09:58:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:15.590 09:58:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:15.590 1+0 records in 00:14:15.590 1+0 records out 00:14:15.590 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000391145 s, 10.5 MB/s 00:14:15.590 09:58:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:15.590 09:58:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:14:15.590 09:58:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:14:15.590 09:58:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:15.590 09:58:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:14:15.590 09:58:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:15.590 09:58:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:15.590 09:58:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:15.850 /dev/nbd1 00:14:15.850 09:58:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:15.850 09:58:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:15.850 09:58:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:14:15.850 09:58:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:14:15.850 09:58:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:15.850 09:58:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:15.850 09:58:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:14:15.850 09:58:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:14:15.850 09:58:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:15.850 09:58:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:15.850 09:58:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:15.850 1+0 records in 00:14:15.850 1+0 records out 00:14:15.850 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000497251 s, 8.2 MB/s 00:14:15.850 09:58:52 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:15.850 09:58:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:14:15.850 09:58:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:15.850 09:58:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:15.850 09:58:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:14:15.850 09:58:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:15.850 09:58:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:15.850 09:58:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:16.110 09:58:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:16.110 09:58:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:16.110 09:58:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:16.110 09:58:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:16.110 09:58:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:16.110 09:58:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:16.110 09:58:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:16.370 09:58:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:16.370 09:58:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:16.370 09:58:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:16.370 09:58:52 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:14:16.370 09:58:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:14:16.370 09:58:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:14:16.370 09:58:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break
00:14:16.370 09:58:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0
00:14:16.370 09:58:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:14:16.370 09:58:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1
00:14:16.370 09:58:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:14:16.370 09:58:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:14:16.370 09:58:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:14:16.370 09:58:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:14:16.370 09:58:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:14:16.370 09:58:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:14:16.631 09:58:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break
00:14:16.631 09:58:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0
00:14:16.631 09:58:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']'
00:14:16.631 09:58:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 77182
00:14:16.631 09:58:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 77182 ']'
00:14:16.631 09:58:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 77182
00:14:16.631 09:58:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # uname
00:14:16.631 09:58:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:14:16.631 09:58:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77182
00:14:16.631 09:58:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:14:16.631 killing process with pid 77182
Received shutdown signal, test time was about 60.000000 seconds
00:14:16.631 
00:14:16.631 Latency(us)
00:14:16.631 [2024-10-21T09:58:53.226Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:16.631 [2024-10-21T09:58:53.226Z] ===================================================================================================================
00:14:16.631 [2024-10-21T09:58:53.226Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:14:16.631 09:58:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:14:16.631 09:58:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77182'
00:14:16.631 09:58:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@969 -- # kill 77182
00:14:16.631 [2024-10-21 09:58:53.016931] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:14:16.631 09:58:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@974 -- # wait 77182
00:14:17.200 [2024-10-21 09:58:53.565389] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:14:18.584 09:58:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0
00:14:18.584 
00:14:18.584 real 0m17.923s
00:14:18.584 user 0m19.771s
00:14:18.584 sys 0m3.426s
00:14:18.584 09:58:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:14:18.584 09:58:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:14:18.584 ************************************
00:14:18.584 END TEST raid_rebuild_test
00:14:18.584 ************************************
00:14:18.584 09:58:54 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true
00:14:18.584 09:58:54 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']'
00:14:18.584 09:58:54 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:14:18.584 09:58:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:14:18.584 ************************************
00:14:18.584 START TEST raid_rebuild_test_sb
00:14:18.584 ************************************
00:14:18.584 09:58:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 true false true
00:14:18.584 09:58:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1
00:14:18.584 09:58:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4
00:14:18.584 09:58:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true
00:14:18.584 09:58:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false
00:14:18.584 09:58:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true
00:14:18.584 09:58:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 ))
00:14:18.584 09:58:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:14:18.584 09:58:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1
00:14:18.584 09:58:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:14:18.584 09:58:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:14:18.584 09:58:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2
00:14:18.584 09:58:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:14:18.584 09:58:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:14:18.584 09:58:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3
00:14:18.584 09:58:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:14:18.584 09:58:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:14:18.584 09:58:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4
00:14:18.584 09:58:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:14:18.584 09:58:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:14:18.584 09:58:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:14:18.584 09:58:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs
00:14:18.584 09:58:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1
00:14:18.584 09:58:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size
00:14:18.584 09:58:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg
00:14:18.584 09:58:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size
00:14:18.584 09:58:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset
00:14:18.584 09:58:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']'
00:14:18.584 09:58:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0
00:14:18.584 09:58:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']'
00:14:18.584 09:58:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s'
00:14:18.584 09:58:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=77629
00:14:18.584 09:58:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid
00:14:18.584 09:58:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 77629
00:14:18.584 09:58:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 77629 ']'
00:14:18.584 09:58:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:18.584 09:58:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100
00:14:18.584 09:58:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:14:18.584 09:58:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable
00:14:18.584 09:58:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:18.584 [2024-10-21 09:58:54.966759] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization...
00:14:18.584 [2024-10-21 09:58:54.967025] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77629 ]
I/O size of 3145728 is greater than zero copy threshold (65536).
Zero copy mechanism will not be used.
00:14:18.584 [2024-10-21 09:58:55.135367] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:18.847 [2024-10-21 09:58:55.284395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:14:19.112 [2024-10-21 09:58:55.535336] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:14:19.112 [2024-10-21 09:58:55.535505] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:14:19.371 09:58:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:14:19.371 09:58:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0
00:14:19.372 09:58:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:14:19.372 09:58:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:14:19.372 09:58:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:19.372 09:58:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:19.372 BaseBdev1_malloc
00:14:19.372 09:58:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:19.372 09:58:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:14:19.372 09:58:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:19.372 09:58:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:19.372 [2024-10-21 09:58:55.906964] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:14:19.372 [2024-10-21 09:58:55.907063] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:19.372 [2024-10-21 09:58:55.907093] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80
00:14:19.372 [2024-10-21 09:58:55.907106] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:19.372 [2024-10-21 09:58:55.909749] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:19.372 [2024-10-21 09:58:55.909788] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:14:19.372 BaseBdev1
00:14:19.372 09:58:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:19.372 09:58:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:14:19.372 09:58:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:14:19.372 09:58:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:19.372 09:58:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:19.372 BaseBdev2_malloc
00:14:19.372 09:58:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:19.372 09:58:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
00:14:19.372 09:58:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:19.372 09:58:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:19.632 [2024-10-21 09:58:55.972092] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc
00:14:19.632 [2024-10-21 09:58:55.972161] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:19.632 [2024-10-21 09:58:55.972186] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80
00:14:19.632 [2024-10-21 09:58:55.972210] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:19.632 [2024-10-21 09:58:55.974732] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:19.632 [2024-10-21 09:58:55.974774] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:14:19.632 BaseBdev2
00:14:19.632 09:58:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:19.632 09:58:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:14:19.632 09:58:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:14:19.632 09:58:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:19.632 09:58:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:19.632 BaseBdev3_malloc
00:14:19.632 09:58:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:19.632 09:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3
00:14:19.632 09:58:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:19.632 09:58:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:19.632 [2024-10-21 09:58:56.048413] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc
00:14:19.632 [2024-10-21 09:58:56.048471] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:19.632 [2024-10-21 09:58:56.048494] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780
00:14:19.632 [2024-10-21 09:58:56.048506] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:19.632 [2024-10-21 09:58:56.050968] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:19.632 [2024-10-21 09:58:56.051004] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:14:19.632 BaseBdev3
00:14:19.632 09:58:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:19.632 09:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:14:19.632 09:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc
00:14:19.632 09:58:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:19.632 09:58:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:19.632 BaseBdev4_malloc
00:14:19.632 09:58:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:19.632 09:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4
00:14:19.632 09:58:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:19.632 09:58:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:19.632 [2024-10-21 09:58:56.116039] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc
00:14:19.632 [2024-10-21 09:58:56.116100] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:19.632 [2024-10-21 09:58:56.116124] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380
00:14:19.632 [2024-10-21 09:58:56.116137] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:19.632 [2024-10-21 09:58:56.118657] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:19.632 [2024-10-21 09:58:56.118693] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4
00:14:19.632 BaseBdev4
00:14:19.632 09:58:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:19.632 09:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc
00:14:19.632 09:58:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:19.632 09:58:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:19.632 spare_malloc
00:14:19.632 09:58:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:19.633 09:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
00:14:19.633 09:58:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:19.633 09:58:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:19.633 spare_delay
00:14:19.633 09:58:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:19.633 09:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:14:19.633 09:58:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:19.633 09:58:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:19.633 [2024-10-21 09:58:56.197480] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:14:19.633 [2024-10-21 09:58:56.197547] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:19.633 [2024-10-21 09:58:56.197580] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580
00:14:19.633 [2024-10-21 09:58:56.197592] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:19.633 [2024-10-21 09:58:56.200172] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:19.633 [2024-10-21 09:58:56.200205] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:14:19.633 spare
00:14:19.633 09:58:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:19.633 09:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1
00:14:19.633 09:58:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:19.633 09:58:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:19.633 [2024-10-21 09:58:56.209505] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:14:19.633 [2024-10-21 09:58:56.211729] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:14:19.633 [2024-10-21 09:58:56.211822] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:14:19.633 [2024-10-21 09:58:56.211881] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:14:19.633 [2024-10-21 09:58:56.212107] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000005b80
00:14:19.633 [2024-10-21 09:58:56.212147] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:14:19.633 [2024-10-21 09:58:56.212444] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:14:19.633 [2024-10-21 09:58:56.212697] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000005b80
00:14:19.633 [2024-10-21 09:58:56.212717] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000005b80
00:14:19.633 [2024-10-21 09:58:56.212901] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:14:19.633 09:58:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:19.633 09:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4
00:14:19.633 09:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:19.633 09:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:19.633 09:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:14:19.633 09:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:14:19.633 09:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:14:19.633 09:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:19.633 09:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:19.633 09:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:19.633 09:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:19.633 09:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:19.633 09:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:19.633 09:58:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:19.633 09:58:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:19.892 09:58:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:19.892 09:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:19.892 "name": "raid_bdev1",
00:14:19.892 "uuid": "8495a811-050e-4e2d-90e1-4ad2962f5c55",
00:14:19.892 "strip_size_kb": 0,
00:14:19.892 "state": "online",
00:14:19.892 "raid_level": "raid1",
00:14:19.892 "superblock": true,
00:14:19.892 "num_base_bdevs": 4,
00:14:19.892 "num_base_bdevs_discovered": 4,
00:14:19.892 "num_base_bdevs_operational": 4,
00:14:19.892 "base_bdevs_list": [
00:14:19.892 {
00:14:19.892 "name": "BaseBdev1",
00:14:19.892 "uuid": "6687b976-e02e-5e8a-8f5e-56fdab37461d",
00:14:19.892 "is_configured": true,
00:14:19.892 "data_offset": 2048,
00:14:19.892 "data_size": 63488
00:14:19.892 },
00:14:19.892 {
00:14:19.892 "name": "BaseBdev2",
00:14:19.892 "uuid": "dc39e374-5e54-51a7-bec6-7e802cee226e",
00:14:19.892 "is_configured": true,
00:14:19.892 "data_offset": 2048,
00:14:19.892 "data_size": 63488
00:14:19.892 },
00:14:19.892 {
00:14:19.892 "name": "BaseBdev3",
00:14:19.892 "uuid": "800e67f0-35f7-5466-b38b-21fd1e65140a",
00:14:19.892 "is_configured": true,
00:14:19.892 "data_offset": 2048,
00:14:19.892 "data_size": 63488
00:14:19.892 },
00:14:19.892 {
00:14:19.892 "name": "BaseBdev4",
00:14:19.892 "uuid": "7d2c0ac7-6631-5c49-afb3-7fcff52d067c",
00:14:19.892 "is_configured": true,
00:14:19.892 "data_offset": 2048,
00:14:19.892 "data_size": 63488
00:14:19.892 }
00:14:19.892 ]
00:14:19.892 }'
00:14:19.892 09:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:19.892 09:58:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:20.151 09:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:14:20.151 09:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks'
00:14:20.151 09:58:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:20.151 09:58:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:20.151 [2024-10-21 09:58:56.701048] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:14:20.151 09:58:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:20.151 09:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488
00:14:20.151 09:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:20.151 09:58:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:20.151 09:58:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:20.151 09:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:14:20.409 09:58:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:20.409 09:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048
00:14:20.409 09:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']'
00:14:20.409 09:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']'
00:14:20.409 09:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size
00:14:20.409 09:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0
00:14:20.409 09:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:14:20.409 09:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1')
00:14:20.410 09:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list
00:14:20.410 09:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:14:20.410 09:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list
00:14:20.410 09:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i
00:14:20.410 09:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:14:20.410 09:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:14:20.410 09:58:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0
00:14:20.410 [2024-10-21 09:58:56.976258] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
/dev/nbd0
00:14:20.669 09:58:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:14:20.669 09:58:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:14:20.669 09:58:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:14:20.669 09:58:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i
00:14:20.669 09:58:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:14:20.669 09:58:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:14:20.669 09:58:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:14:20.669 09:58:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break
00:14:20.669 09:58:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:14:20.669 09:58:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:14:20.669 09:58:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:14:20.669 1+0 records in
00:14:20.669 1+0 records out
00:14:20.669 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000398236 s, 10.3 MB/s
00:14:20.669 09:58:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:14:20.669 09:58:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096
00:14:20.669 09:58:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:14:20.669 09:58:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:14:20.669 09:58:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0
00:14:20.669 09:58:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:14:20.669 09:58:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:14:20.669 09:58:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']'
00:14:20.669 09:58:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1
00:14:20.669 09:58:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct
00:14:25.949 63488+0 records in
00:14:25.949 63488+0 records out
00:14:25.949 32505856 bytes (33 MB, 31 MiB) copied, 5.22765 s, 6.2 MB/s
00:14:25.949 09:59:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0
00:14:25.949 09:59:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:14:25.949 09:59:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:14:25.949 09:59:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list
00:14:25.949 09:59:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i
00:14:25.949 09:59:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:14:25.949 09:59:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:14:25.949 09:59:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:14:25.949 [2024-10-21 09:59:02.509834] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:14:25.949 09:59:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:14:25.949 09:59:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:14:25.949 09:59:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:14:25.949 09:59:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:14:25.949 09:59:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:14:25.949 09:59:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break
00:14:25.949 09:59:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0
00:14:25.949 09:59:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1
00:14:25.949 09:59:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:25.949 09:59:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:25.949 [2024-10-21 09:59:02.529916] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:14:25.949 09:59:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:25.949 09:59:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:14:25.949 09:59:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:25.949 09:59:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:25.949 09:59:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:14:25.949 09:59:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:14:25.949 09:59:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:14:25.949 09:59:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:25.949 09:59:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:25.949 09:59:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:25.949 09:59:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:25.950 09:59:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:25.950 09:59:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:25.950 09:59:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:25.950 09:59:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:26.209 09:59:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:26.209 09:59:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:26.209 "name": "raid_bdev1",
00:14:26.209 "uuid": "8495a811-050e-4e2d-90e1-4ad2962f5c55",
00:14:26.209 "strip_size_kb": 0,
00:14:26.209 "state": "online",
00:14:26.209 "raid_level": "raid1",
00:14:26.209 "superblock": true,
00:14:26.209 "num_base_bdevs": 4,
00:14:26.209 "num_base_bdevs_discovered": 3,
00:14:26.209 "num_base_bdevs_operational": 3,
00:14:26.209 "base_bdevs_list": [
00:14:26.209 {
00:14:26.209 "name": null,
00:14:26.209 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:26.209 "is_configured": false,
00:14:26.209 "data_offset": 0,
00:14:26.209 "data_size": 63488
00:14:26.209 },
00:14:26.209 {
00:14:26.209 "name": "BaseBdev2",
00:14:26.209 "uuid": "dc39e374-5e54-51a7-bec6-7e802cee226e",
00:14:26.209 "is_configured": true,
00:14:26.209 "data_offset": 2048,
00:14:26.209 "data_size": 63488
00:14:26.209 },
00:14:26.209 {
00:14:26.209 "name": "BaseBdev3",
00:14:26.209 "uuid": "800e67f0-35f7-5466-b38b-21fd1e65140a",
00:14:26.209 "is_configured": true,
00:14:26.209 "data_offset": 2048,
00:14:26.209 "data_size": 63488
00:14:26.209 },
00:14:26.209 {
00:14:26.209 "name": "BaseBdev4",
00:14:26.209 "uuid": "7d2c0ac7-6631-5c49-afb3-7fcff52d067c",
00:14:26.209 "is_configured": true,
00:14:26.209 "data_offset": 2048,
00:14:26.209 "data_size": 63488
00:14:26.209 }
00:14:26.209 ]
00:14:26.209 }'
00:14:26.209 09:59:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:26.209 09:59:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:26.469 09:59:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:14:26.469 09:59:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:26.469 09:59:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:26.469 [2024-10-21 09:59:02.989192] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:14:26.469 [2024-10-21 09:59:03.006261] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3290
00:14:26.469 09:59:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:26.469 09:59:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1
00:14:26.469 [2024-10-21 09:59:03.008509] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:14:27.425 09:59:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:14:27.425 09:59:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:14:27.425 09:59:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:14:27.425 09:59:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:14:27.425 09:59:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:14:27.425 09:59:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:27.425 09:59:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:27.425 09:59:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:27.685 09:59:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:27.685 09:59:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:27.685 09:59:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:14:27.685 "name": "raid_bdev1",
00:14:27.685 "uuid": "8495a811-050e-4e2d-90e1-4ad2962f5c55",
00:14:27.685 "strip_size_kb": 0,
00:14:27.685 "state": "online",
00:14:27.685 "raid_level": "raid1",
00:14:27.685 "superblock": true,
00:14:27.685 "num_base_bdevs": 4,
00:14:27.685 "num_base_bdevs_discovered": 4,
00:14:27.685 "num_base_bdevs_operational": 4,
00:14:27.685 "process": {
00:14:27.685 "type": "rebuild",
00:14:27.685 "target": "spare",
00:14:27.685 "progress": {
00:14:27.685 "blocks": 20480,
00:14:27.685 "percent": 32
00:14:27.685 }
00:14:27.685 },
00:14:27.685 "base_bdevs_list": [
00:14:27.685 {
00:14:27.685 "name": "spare",
00:14:27.685 "uuid": "6d86f9ea-1eba-55e8-9a32-1c9cfabc7475",
00:14:27.685 "is_configured": true,
00:14:27.685 "data_offset": 2048,
00:14:27.685 "data_size": 63488
00:14:27.685 },
00:14:27.685 {
00:14:27.685 "name": "BaseBdev2",
00:14:27.685 "uuid": "dc39e374-5e54-51a7-bec6-7e802cee226e",
00:14:27.685 "is_configured": true,
00:14:27.685 "data_offset": 2048,
00:14:27.685 "data_size": 63488
00:14:27.685 },
00:14:27.685 {
00:14:27.685 "name": "BaseBdev3",
00:14:27.685 "uuid": "800e67f0-35f7-5466-b38b-21fd1e65140a",
00:14:27.685 "is_configured": true,
00:14:27.685 "data_offset": 2048,
00:14:27.685 "data_size": 63488
00:14:27.685 },
00:14:27.685 {
00:14:27.685 "name": "BaseBdev4",
00:14:27.685 "uuid": "7d2c0ac7-6631-5c49-afb3-7fcff52d067c",
00:14:27.685 "is_configured": true,
00:14:27.685 "data_offset": 2048,
00:14:27.685 "data_size": 63488
00:14:27.685 } 00:14:27.685 ] 00:14:27.685 }' 00:14:27.685 09:59:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:27.685 09:59:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:27.685 09:59:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:27.685 09:59:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:27.685 09:59:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:27.685 09:59:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.685 09:59:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.685 [2024-10-21 09:59:04.172013] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:27.685 [2024-10-21 09:59:04.217584] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:27.685 [2024-10-21 09:59:04.217648] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:27.685 [2024-10-21 09:59:04.217665] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:27.685 [2024-10-21 09:59:04.217676] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:27.685 09:59:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.685 09:59:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:27.686 09:59:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:27.686 09:59:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:27.686 09:59:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 
-- # local raid_level=raid1 00:14:27.686 09:59:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:27.686 09:59:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:27.686 09:59:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:27.686 09:59:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:27.686 09:59:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:27.686 09:59:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:27.686 09:59:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.686 09:59:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.686 09:59:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.686 09:59:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.686 09:59:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.945 09:59:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:27.945 "name": "raid_bdev1", 00:14:27.945 "uuid": "8495a811-050e-4e2d-90e1-4ad2962f5c55", 00:14:27.945 "strip_size_kb": 0, 00:14:27.945 "state": "online", 00:14:27.945 "raid_level": "raid1", 00:14:27.945 "superblock": true, 00:14:27.945 "num_base_bdevs": 4, 00:14:27.945 "num_base_bdevs_discovered": 3, 00:14:27.945 "num_base_bdevs_operational": 3, 00:14:27.945 "base_bdevs_list": [ 00:14:27.945 { 00:14:27.945 "name": null, 00:14:27.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.945 "is_configured": false, 00:14:27.945 "data_offset": 0, 00:14:27.945 "data_size": 63488 00:14:27.945 }, 00:14:27.945 { 00:14:27.945 "name": "BaseBdev2", 00:14:27.945 "uuid": 
"dc39e374-5e54-51a7-bec6-7e802cee226e", 00:14:27.945 "is_configured": true, 00:14:27.945 "data_offset": 2048, 00:14:27.945 "data_size": 63488 00:14:27.945 }, 00:14:27.946 { 00:14:27.946 "name": "BaseBdev3", 00:14:27.946 "uuid": "800e67f0-35f7-5466-b38b-21fd1e65140a", 00:14:27.946 "is_configured": true, 00:14:27.946 "data_offset": 2048, 00:14:27.946 "data_size": 63488 00:14:27.946 }, 00:14:27.946 { 00:14:27.946 "name": "BaseBdev4", 00:14:27.946 "uuid": "7d2c0ac7-6631-5c49-afb3-7fcff52d067c", 00:14:27.946 "is_configured": true, 00:14:27.946 "data_offset": 2048, 00:14:27.946 "data_size": 63488 00:14:27.946 } 00:14:27.946 ] 00:14:27.946 }' 00:14:27.946 09:59:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:27.946 09:59:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.206 09:59:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:28.206 09:59:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:28.206 09:59:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:28.206 09:59:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:28.206 09:59:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:28.206 09:59:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.206 09:59:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:28.206 09:59:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.206 09:59:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.206 09:59:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.206 09:59:04 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:28.206 "name": "raid_bdev1", 00:14:28.206 "uuid": "8495a811-050e-4e2d-90e1-4ad2962f5c55", 00:14:28.206 "strip_size_kb": 0, 00:14:28.206 "state": "online", 00:14:28.206 "raid_level": "raid1", 00:14:28.206 "superblock": true, 00:14:28.206 "num_base_bdevs": 4, 00:14:28.206 "num_base_bdevs_discovered": 3, 00:14:28.206 "num_base_bdevs_operational": 3, 00:14:28.206 "base_bdevs_list": [ 00:14:28.206 { 00:14:28.206 "name": null, 00:14:28.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.206 "is_configured": false, 00:14:28.206 "data_offset": 0, 00:14:28.206 "data_size": 63488 00:14:28.206 }, 00:14:28.206 { 00:14:28.206 "name": "BaseBdev2", 00:14:28.206 "uuid": "dc39e374-5e54-51a7-bec6-7e802cee226e", 00:14:28.206 "is_configured": true, 00:14:28.206 "data_offset": 2048, 00:14:28.206 "data_size": 63488 00:14:28.206 }, 00:14:28.206 { 00:14:28.206 "name": "BaseBdev3", 00:14:28.206 "uuid": "800e67f0-35f7-5466-b38b-21fd1e65140a", 00:14:28.206 "is_configured": true, 00:14:28.206 "data_offset": 2048, 00:14:28.206 "data_size": 63488 00:14:28.206 }, 00:14:28.206 { 00:14:28.206 "name": "BaseBdev4", 00:14:28.206 "uuid": "7d2c0ac7-6631-5c49-afb3-7fcff52d067c", 00:14:28.206 "is_configured": true, 00:14:28.206 "data_offset": 2048, 00:14:28.206 "data_size": 63488 00:14:28.206 } 00:14:28.206 ] 00:14:28.206 }' 00:14:28.206 09:59:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:28.206 09:59:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:28.206 09:59:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:28.206 09:59:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:28.206 09:59:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:28.206 09:59:04 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.206 09:59:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.206 [2024-10-21 09:59:04.785253] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:28.206 [2024-10-21 09:59:04.800740] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:14:28.466 09:59:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.466 09:59:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:28.466 [2024-10-21 09:59:04.802955] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:29.405 09:59:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:29.405 09:59:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:29.405 09:59:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:29.405 09:59:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:29.405 09:59:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:29.405 09:59:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.405 09:59:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.405 09:59:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.405 09:59:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.405 09:59:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.405 09:59:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:29.405 "name": "raid_bdev1", 00:14:29.405 "uuid": 
"8495a811-050e-4e2d-90e1-4ad2962f5c55", 00:14:29.405 "strip_size_kb": 0, 00:14:29.405 "state": "online", 00:14:29.405 "raid_level": "raid1", 00:14:29.405 "superblock": true, 00:14:29.405 "num_base_bdevs": 4, 00:14:29.405 "num_base_bdevs_discovered": 4, 00:14:29.405 "num_base_bdevs_operational": 4, 00:14:29.405 "process": { 00:14:29.405 "type": "rebuild", 00:14:29.405 "target": "spare", 00:14:29.405 "progress": { 00:14:29.405 "blocks": 20480, 00:14:29.405 "percent": 32 00:14:29.405 } 00:14:29.405 }, 00:14:29.405 "base_bdevs_list": [ 00:14:29.405 { 00:14:29.405 "name": "spare", 00:14:29.405 "uuid": "6d86f9ea-1eba-55e8-9a32-1c9cfabc7475", 00:14:29.405 "is_configured": true, 00:14:29.405 "data_offset": 2048, 00:14:29.405 "data_size": 63488 00:14:29.405 }, 00:14:29.405 { 00:14:29.405 "name": "BaseBdev2", 00:14:29.405 "uuid": "dc39e374-5e54-51a7-bec6-7e802cee226e", 00:14:29.405 "is_configured": true, 00:14:29.405 "data_offset": 2048, 00:14:29.405 "data_size": 63488 00:14:29.405 }, 00:14:29.405 { 00:14:29.405 "name": "BaseBdev3", 00:14:29.405 "uuid": "800e67f0-35f7-5466-b38b-21fd1e65140a", 00:14:29.405 "is_configured": true, 00:14:29.405 "data_offset": 2048, 00:14:29.405 "data_size": 63488 00:14:29.405 }, 00:14:29.405 { 00:14:29.405 "name": "BaseBdev4", 00:14:29.405 "uuid": "7d2c0ac7-6631-5c49-afb3-7fcff52d067c", 00:14:29.405 "is_configured": true, 00:14:29.405 "data_offset": 2048, 00:14:29.405 "data_size": 63488 00:14:29.405 } 00:14:29.405 ] 00:14:29.405 }' 00:14:29.405 09:59:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:29.405 09:59:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:29.405 09:59:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:29.405 09:59:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:29.405 09:59:05 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:29.405 09:59:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:29.405 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:29.405 09:59:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:29.405 09:59:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:29.405 09:59:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:29.405 09:59:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:29.405 09:59:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.405 09:59:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.405 [2024-10-21 09:59:05.951469] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:29.665 [2024-10-21 09:59:06.112248] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca3360 00:14:29.665 09:59:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.665 09:59:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:29.665 09:59:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:29.665 09:59:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:29.665 09:59:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:29.665 09:59:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:29.665 09:59:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:29.665 09:59:06 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:29.665 09:59:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.665 09:59:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.665 09:59:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.665 09:59:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.665 09:59:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.665 09:59:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:29.665 "name": "raid_bdev1", 00:14:29.665 "uuid": "8495a811-050e-4e2d-90e1-4ad2962f5c55", 00:14:29.665 "strip_size_kb": 0, 00:14:29.665 "state": "online", 00:14:29.665 "raid_level": "raid1", 00:14:29.665 "superblock": true, 00:14:29.665 "num_base_bdevs": 4, 00:14:29.665 "num_base_bdevs_discovered": 3, 00:14:29.665 "num_base_bdevs_operational": 3, 00:14:29.665 "process": { 00:14:29.665 "type": "rebuild", 00:14:29.665 "target": "spare", 00:14:29.665 "progress": { 00:14:29.665 "blocks": 24576, 00:14:29.665 "percent": 38 00:14:29.665 } 00:14:29.665 }, 00:14:29.665 "base_bdevs_list": [ 00:14:29.665 { 00:14:29.665 "name": "spare", 00:14:29.665 "uuid": "6d86f9ea-1eba-55e8-9a32-1c9cfabc7475", 00:14:29.665 "is_configured": true, 00:14:29.665 "data_offset": 2048, 00:14:29.665 "data_size": 63488 00:14:29.665 }, 00:14:29.665 { 00:14:29.665 "name": null, 00:14:29.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:29.665 "is_configured": false, 00:14:29.665 "data_offset": 0, 00:14:29.665 "data_size": 63488 00:14:29.665 }, 00:14:29.665 { 00:14:29.665 "name": "BaseBdev3", 00:14:29.665 "uuid": "800e67f0-35f7-5466-b38b-21fd1e65140a", 00:14:29.665 "is_configured": true, 00:14:29.665 "data_offset": 2048, 00:14:29.665 "data_size": 63488 00:14:29.665 }, 00:14:29.665 { 00:14:29.665 "name": 
"BaseBdev4", 00:14:29.665 "uuid": "7d2c0ac7-6631-5c49-afb3-7fcff52d067c", 00:14:29.665 "is_configured": true, 00:14:29.665 "data_offset": 2048, 00:14:29.665 "data_size": 63488 00:14:29.665 } 00:14:29.665 ] 00:14:29.665 }' 00:14:29.665 09:59:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:29.665 09:59:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:29.665 09:59:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:29.925 09:59:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:29.925 09:59:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=473 00:14:29.925 09:59:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:29.925 09:59:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:29.925 09:59:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:29.925 09:59:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:29.925 09:59:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:29.925 09:59:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:29.925 09:59:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.925 09:59:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.925 09:59:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.925 09:59:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.925 09:59:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:14:29.925 09:59:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:29.925 "name": "raid_bdev1", 00:14:29.925 "uuid": "8495a811-050e-4e2d-90e1-4ad2962f5c55", 00:14:29.925 "strip_size_kb": 0, 00:14:29.925 "state": "online", 00:14:29.925 "raid_level": "raid1", 00:14:29.925 "superblock": true, 00:14:29.925 "num_base_bdevs": 4, 00:14:29.925 "num_base_bdevs_discovered": 3, 00:14:29.925 "num_base_bdevs_operational": 3, 00:14:29.925 "process": { 00:14:29.925 "type": "rebuild", 00:14:29.925 "target": "spare", 00:14:29.925 "progress": { 00:14:29.925 "blocks": 26624, 00:14:29.925 "percent": 41 00:14:29.925 } 00:14:29.925 }, 00:14:29.925 "base_bdevs_list": [ 00:14:29.925 { 00:14:29.925 "name": "spare", 00:14:29.925 "uuid": "6d86f9ea-1eba-55e8-9a32-1c9cfabc7475", 00:14:29.925 "is_configured": true, 00:14:29.925 "data_offset": 2048, 00:14:29.925 "data_size": 63488 00:14:29.925 }, 00:14:29.925 { 00:14:29.925 "name": null, 00:14:29.925 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:29.925 "is_configured": false, 00:14:29.925 "data_offset": 0, 00:14:29.925 "data_size": 63488 00:14:29.925 }, 00:14:29.925 { 00:14:29.925 "name": "BaseBdev3", 00:14:29.925 "uuid": "800e67f0-35f7-5466-b38b-21fd1e65140a", 00:14:29.925 "is_configured": true, 00:14:29.925 "data_offset": 2048, 00:14:29.925 "data_size": 63488 00:14:29.925 }, 00:14:29.925 { 00:14:29.925 "name": "BaseBdev4", 00:14:29.925 "uuid": "7d2c0ac7-6631-5c49-afb3-7fcff52d067c", 00:14:29.925 "is_configured": true, 00:14:29.925 "data_offset": 2048, 00:14:29.925 "data_size": 63488 00:14:29.925 } 00:14:29.925 ] 00:14:29.925 }' 00:14:29.925 09:59:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:29.925 09:59:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:29.925 09:59:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:29.925 09:59:06 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:29.925 09:59:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:30.863 09:59:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:30.863 09:59:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:30.863 09:59:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:30.863 09:59:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:30.863 09:59:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:30.863 09:59:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:30.863 09:59:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.863 09:59:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.863 09:59:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.863 09:59:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.863 09:59:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.863 09:59:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:30.863 "name": "raid_bdev1", 00:14:30.863 "uuid": "8495a811-050e-4e2d-90e1-4ad2962f5c55", 00:14:30.863 "strip_size_kb": 0, 00:14:30.863 "state": "online", 00:14:30.863 "raid_level": "raid1", 00:14:30.863 "superblock": true, 00:14:30.863 "num_base_bdevs": 4, 00:14:30.863 "num_base_bdevs_discovered": 3, 00:14:30.863 "num_base_bdevs_operational": 3, 00:14:30.863 "process": { 00:14:30.863 "type": "rebuild", 00:14:30.863 "target": "spare", 00:14:30.863 "progress": { 00:14:30.863 "blocks": 
49152, 00:14:30.863 "percent": 77 00:14:30.863 } 00:14:30.863 }, 00:14:30.863 "base_bdevs_list": [ 00:14:30.863 { 00:14:30.863 "name": "spare", 00:14:30.863 "uuid": "6d86f9ea-1eba-55e8-9a32-1c9cfabc7475", 00:14:30.863 "is_configured": true, 00:14:30.863 "data_offset": 2048, 00:14:30.863 "data_size": 63488 00:14:30.863 }, 00:14:30.863 { 00:14:30.863 "name": null, 00:14:30.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.863 "is_configured": false, 00:14:30.863 "data_offset": 0, 00:14:30.863 "data_size": 63488 00:14:30.863 }, 00:14:30.863 { 00:14:30.863 "name": "BaseBdev3", 00:14:30.863 "uuid": "800e67f0-35f7-5466-b38b-21fd1e65140a", 00:14:30.863 "is_configured": true, 00:14:30.863 "data_offset": 2048, 00:14:30.863 "data_size": 63488 00:14:30.863 }, 00:14:30.863 { 00:14:30.863 "name": "BaseBdev4", 00:14:30.863 "uuid": "7d2c0ac7-6631-5c49-afb3-7fcff52d067c", 00:14:30.863 "is_configured": true, 00:14:30.863 "data_offset": 2048, 00:14:30.863 "data_size": 63488 00:14:30.863 } 00:14:30.863 ] 00:14:30.863 }' 00:14:30.863 09:59:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:31.122 09:59:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:31.122 09:59:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:31.122 09:59:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:31.122 09:59:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:31.691 [2024-10-21 09:59:08.028355] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:31.691 [2024-10-21 09:59:08.028465] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:31.691 [2024-10-21 09:59:08.028667] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:31.950 09:59:08 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:31.950 09:59:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:31.950 09:59:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:31.950 09:59:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:31.950 09:59:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:31.950 09:59:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:31.950 09:59:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.950 09:59:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.950 09:59:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.950 09:59:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.210 09:59:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.210 09:59:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:32.210 "name": "raid_bdev1", 00:14:32.210 "uuid": "8495a811-050e-4e2d-90e1-4ad2962f5c55", 00:14:32.210 "strip_size_kb": 0, 00:14:32.210 "state": "online", 00:14:32.210 "raid_level": "raid1", 00:14:32.210 "superblock": true, 00:14:32.210 "num_base_bdevs": 4, 00:14:32.210 "num_base_bdevs_discovered": 3, 00:14:32.210 "num_base_bdevs_operational": 3, 00:14:32.210 "base_bdevs_list": [ 00:14:32.210 { 00:14:32.210 "name": "spare", 00:14:32.210 "uuid": "6d86f9ea-1eba-55e8-9a32-1c9cfabc7475", 00:14:32.210 "is_configured": true, 00:14:32.210 "data_offset": 2048, 00:14:32.210 "data_size": 63488 00:14:32.210 }, 00:14:32.210 { 00:14:32.210 "name": null, 00:14:32.210 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:32.210 "is_configured": false, 00:14:32.210 "data_offset": 0, 00:14:32.210 "data_size": 63488 00:14:32.210 }, 00:14:32.210 { 00:14:32.210 "name": "BaseBdev3", 00:14:32.210 "uuid": "800e67f0-35f7-5466-b38b-21fd1e65140a", 00:14:32.210 "is_configured": true, 00:14:32.210 "data_offset": 2048, 00:14:32.210 "data_size": 63488 00:14:32.210 }, 00:14:32.210 { 00:14:32.210 "name": "BaseBdev4", 00:14:32.210 "uuid": "7d2c0ac7-6631-5c49-afb3-7fcff52d067c", 00:14:32.210 "is_configured": true, 00:14:32.210 "data_offset": 2048, 00:14:32.210 "data_size": 63488 00:14:32.210 } 00:14:32.210 ] 00:14:32.210 }' 00:14:32.210 09:59:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:32.210 09:59:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:32.210 09:59:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:32.210 09:59:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:32.210 09:59:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:14:32.210 09:59:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:32.210 09:59:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:32.210 09:59:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:32.210 09:59:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:32.210 09:59:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:32.210 09:59:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.210 09:59:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.210 09:59:08 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.210 09:59:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:32.210 09:59:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.210 09:59:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:32.210 "name": "raid_bdev1", 00:14:32.210 "uuid": "8495a811-050e-4e2d-90e1-4ad2962f5c55", 00:14:32.210 "strip_size_kb": 0, 00:14:32.210 "state": "online", 00:14:32.210 "raid_level": "raid1", 00:14:32.210 "superblock": true, 00:14:32.210 "num_base_bdevs": 4, 00:14:32.210 "num_base_bdevs_discovered": 3, 00:14:32.210 "num_base_bdevs_operational": 3, 00:14:32.210 "base_bdevs_list": [ 00:14:32.210 { 00:14:32.210 "name": "spare", 00:14:32.210 "uuid": "6d86f9ea-1eba-55e8-9a32-1c9cfabc7475", 00:14:32.210 "is_configured": true, 00:14:32.210 "data_offset": 2048, 00:14:32.210 "data_size": 63488 00:14:32.210 }, 00:14:32.210 { 00:14:32.210 "name": null, 00:14:32.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.210 "is_configured": false, 00:14:32.210 "data_offset": 0, 00:14:32.210 "data_size": 63488 00:14:32.210 }, 00:14:32.210 { 00:14:32.210 "name": "BaseBdev3", 00:14:32.210 "uuid": "800e67f0-35f7-5466-b38b-21fd1e65140a", 00:14:32.210 "is_configured": true, 00:14:32.210 "data_offset": 2048, 00:14:32.210 "data_size": 63488 00:14:32.210 }, 00:14:32.210 { 00:14:32.210 "name": "BaseBdev4", 00:14:32.210 "uuid": "7d2c0ac7-6631-5c49-afb3-7fcff52d067c", 00:14:32.210 "is_configured": true, 00:14:32.210 "data_offset": 2048, 00:14:32.210 "data_size": 63488 00:14:32.210 } 00:14:32.210 ] 00:14:32.210 }' 00:14:32.210 09:59:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:32.210 09:59:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:32.210 09:59:08 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:32.470 09:59:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:32.470 09:59:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:32.470 09:59:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:32.470 09:59:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:32.470 09:59:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:32.470 09:59:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:32.470 09:59:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:32.470 09:59:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:32.470 09:59:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:32.470 09:59:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:32.470 09:59:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:32.470 09:59:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.470 09:59:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.470 09:59:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.470 09:59:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:32.470 09:59:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.470 09:59:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:32.470 "name": "raid_bdev1", 00:14:32.470 "uuid": 
"8495a811-050e-4e2d-90e1-4ad2962f5c55", 00:14:32.470 "strip_size_kb": 0, 00:14:32.470 "state": "online", 00:14:32.470 "raid_level": "raid1", 00:14:32.470 "superblock": true, 00:14:32.470 "num_base_bdevs": 4, 00:14:32.470 "num_base_bdevs_discovered": 3, 00:14:32.470 "num_base_bdevs_operational": 3, 00:14:32.470 "base_bdevs_list": [ 00:14:32.470 { 00:14:32.470 "name": "spare", 00:14:32.470 "uuid": "6d86f9ea-1eba-55e8-9a32-1c9cfabc7475", 00:14:32.470 "is_configured": true, 00:14:32.470 "data_offset": 2048, 00:14:32.470 "data_size": 63488 00:14:32.470 }, 00:14:32.470 { 00:14:32.470 "name": null, 00:14:32.470 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.470 "is_configured": false, 00:14:32.470 "data_offset": 0, 00:14:32.470 "data_size": 63488 00:14:32.470 }, 00:14:32.470 { 00:14:32.470 "name": "BaseBdev3", 00:14:32.470 "uuid": "800e67f0-35f7-5466-b38b-21fd1e65140a", 00:14:32.470 "is_configured": true, 00:14:32.470 "data_offset": 2048, 00:14:32.470 "data_size": 63488 00:14:32.470 }, 00:14:32.470 { 00:14:32.470 "name": "BaseBdev4", 00:14:32.470 "uuid": "7d2c0ac7-6631-5c49-afb3-7fcff52d067c", 00:14:32.470 "is_configured": true, 00:14:32.470 "data_offset": 2048, 00:14:32.470 "data_size": 63488 00:14:32.470 } 00:14:32.470 ] 00:14:32.470 }' 00:14:32.470 09:59:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:32.470 09:59:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.730 09:59:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:32.730 09:59:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.730 09:59:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.730 [2024-10-21 09:59:09.278341] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:32.730 [2024-10-21 09:59:09.278382] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev 
state changing from online to offline 00:14:32.730 [2024-10-21 09:59:09.278492] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:32.730 [2024-10-21 09:59:09.278595] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:32.730 [2024-10-21 09:59:09.278612] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000005b80 name raid_bdev1, state offline 00:14:32.730 09:59:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.730 09:59:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.730 09:59:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.730 09:59:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:14:32.730 09:59:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.730 09:59:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.989 09:59:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:32.989 09:59:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:32.989 09:59:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:32.989 09:59:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:32.989 09:59:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:32.989 09:59:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:32.989 09:59:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:32.989 09:59:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 
00:14:32.989 09:59:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:32.989 09:59:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:32.989 09:59:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:32.989 09:59:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:32.989 09:59:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:32.989 /dev/nbd0 00:14:32.989 09:59:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:33.248 09:59:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:33.248 09:59:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:33.248 09:59:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:14:33.248 09:59:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:33.249 09:59:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:33.249 09:59:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:33.249 09:59:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:14:33.249 09:59:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:33.249 09:59:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:33.249 09:59:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:33.249 1+0 records in 00:14:33.249 1+0 records out 00:14:33.249 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000476358 s, 8.6 MB/s 00:14:33.249 09:59:09 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:33.249 09:59:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:14:33.249 09:59:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:33.249 09:59:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:33.249 09:59:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:14:33.249 09:59:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:33.249 09:59:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:33.249 09:59:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:33.249 /dev/nbd1 00:14:33.509 09:59:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:33.509 09:59:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:33.509 09:59:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:14:33.509 09:59:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:14:33.509 09:59:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:33.509 09:59:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:33.509 09:59:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:14:33.509 09:59:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:14:33.509 09:59:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:33.509 09:59:09 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:33.509 09:59:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:33.509 1+0 records in 00:14:33.509 1+0 records out 00:14:33.509 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000397952 s, 10.3 MB/s 00:14:33.509 09:59:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:33.509 09:59:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:14:33.509 09:59:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:33.509 09:59:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:33.509 09:59:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:14:33.509 09:59:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:33.509 09:59:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:33.509 09:59:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:33.509 09:59:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:33.509 09:59:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:33.509 09:59:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:33.509 09:59:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:33.509 09:59:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:33.509 09:59:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:33.509 09:59:10 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:33.812 09:59:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:33.812 09:59:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:33.812 09:59:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:33.812 09:59:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:33.812 09:59:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:33.812 09:59:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:33.812 09:59:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:33.812 09:59:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:33.812 09:59:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:33.812 09:59:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:34.070 09:59:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:34.070 09:59:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:34.070 09:59:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:34.070 09:59:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:34.070 09:59:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:34.071 09:59:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:34.071 09:59:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:34.071 09:59:10 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@45 -- # return 0 00:14:34.071 09:59:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:34.071 09:59:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:34.071 09:59:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.071 09:59:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.071 09:59:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.071 09:59:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:34.071 09:59:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.071 09:59:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.071 [2024-10-21 09:59:10.516516] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:34.071 [2024-10-21 09:59:10.516624] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:34.071 [2024-10-21 09:59:10.516659] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:14:34.071 [2024-10-21 09:59:10.516671] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:34.071 [2024-10-21 09:59:10.519389] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:34.071 [2024-10-21 09:59:10.519428] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:34.071 [2024-10-21 09:59:10.519555] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:34.071 [2024-10-21 09:59:10.519635] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:34.071 [2024-10-21 09:59:10.519811] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is 
claimed 00:14:34.071 [2024-10-21 09:59:10.519907] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:34.071 spare 00:14:34.071 09:59:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.071 09:59:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:34.071 09:59:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.071 09:59:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.071 [2024-10-21 09:59:10.619857] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000005f00 00:14:34.071 [2024-10-21 09:59:10.619969] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:34.071 [2024-10-21 09:59:10.620419] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1a10 00:14:34.071 [2024-10-21 09:59:10.620730] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000005f00 00:14:34.071 [2024-10-21 09:59:10.620756] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000005f00 00:14:34.071 [2024-10-21 09:59:10.621015] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:34.071 09:59:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.071 09:59:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:34.071 09:59:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:34.071 09:59:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:34.071 09:59:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:34.071 09:59:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:14:34.071 09:59:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:34.071 09:59:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:34.071 09:59:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:34.071 09:59:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:34.071 09:59:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:34.071 09:59:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.071 09:59:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.071 09:59:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:34.071 09:59:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.071 09:59:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.331 09:59:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:34.331 "name": "raid_bdev1", 00:14:34.331 "uuid": "8495a811-050e-4e2d-90e1-4ad2962f5c55", 00:14:34.331 "strip_size_kb": 0, 00:14:34.331 "state": "online", 00:14:34.331 "raid_level": "raid1", 00:14:34.331 "superblock": true, 00:14:34.331 "num_base_bdevs": 4, 00:14:34.331 "num_base_bdevs_discovered": 3, 00:14:34.331 "num_base_bdevs_operational": 3, 00:14:34.331 "base_bdevs_list": [ 00:14:34.331 { 00:14:34.331 "name": "spare", 00:14:34.331 "uuid": "6d86f9ea-1eba-55e8-9a32-1c9cfabc7475", 00:14:34.331 "is_configured": true, 00:14:34.331 "data_offset": 2048, 00:14:34.331 "data_size": 63488 00:14:34.331 }, 00:14:34.331 { 00:14:34.331 "name": null, 00:14:34.331 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.331 "is_configured": false, 00:14:34.331 "data_offset": 2048, 
00:14:34.331 "data_size": 63488 00:14:34.331 }, 00:14:34.331 { 00:14:34.331 "name": "BaseBdev3", 00:14:34.331 "uuid": "800e67f0-35f7-5466-b38b-21fd1e65140a", 00:14:34.331 "is_configured": true, 00:14:34.331 "data_offset": 2048, 00:14:34.331 "data_size": 63488 00:14:34.331 }, 00:14:34.331 { 00:14:34.331 "name": "BaseBdev4", 00:14:34.331 "uuid": "7d2c0ac7-6631-5c49-afb3-7fcff52d067c", 00:14:34.331 "is_configured": true, 00:14:34.331 "data_offset": 2048, 00:14:34.331 "data_size": 63488 00:14:34.331 } 00:14:34.331 ] 00:14:34.331 }' 00:14:34.331 09:59:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:34.331 09:59:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.591 09:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:34.591 09:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:34.591 09:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:34.591 09:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:34.591 09:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:34.591 09:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.591 09:59:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.591 09:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:34.591 09:59:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.591 09:59:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.591 09:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:34.591 "name": "raid_bdev1", 00:14:34.591 "uuid": 
"8495a811-050e-4e2d-90e1-4ad2962f5c55", 00:14:34.591 "strip_size_kb": 0, 00:14:34.591 "state": "online", 00:14:34.591 "raid_level": "raid1", 00:14:34.591 "superblock": true, 00:14:34.591 "num_base_bdevs": 4, 00:14:34.591 "num_base_bdevs_discovered": 3, 00:14:34.591 "num_base_bdevs_operational": 3, 00:14:34.591 "base_bdevs_list": [ 00:14:34.591 { 00:14:34.591 "name": "spare", 00:14:34.591 "uuid": "6d86f9ea-1eba-55e8-9a32-1c9cfabc7475", 00:14:34.591 "is_configured": true, 00:14:34.591 "data_offset": 2048, 00:14:34.591 "data_size": 63488 00:14:34.591 }, 00:14:34.591 { 00:14:34.592 "name": null, 00:14:34.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.592 "is_configured": false, 00:14:34.592 "data_offset": 2048, 00:14:34.592 "data_size": 63488 00:14:34.592 }, 00:14:34.592 { 00:14:34.592 "name": "BaseBdev3", 00:14:34.592 "uuid": "800e67f0-35f7-5466-b38b-21fd1e65140a", 00:14:34.592 "is_configured": true, 00:14:34.592 "data_offset": 2048, 00:14:34.592 "data_size": 63488 00:14:34.592 }, 00:14:34.592 { 00:14:34.592 "name": "BaseBdev4", 00:14:34.592 "uuid": "7d2c0ac7-6631-5c49-afb3-7fcff52d067c", 00:14:34.592 "is_configured": true, 00:14:34.592 "data_offset": 2048, 00:14:34.592 "data_size": 63488 00:14:34.592 } 00:14:34.592 ] 00:14:34.592 }' 00:14:34.592 09:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:34.851 09:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:34.851 09:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:34.851 09:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:34.851 09:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:34.851 09:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.851 09:59:11 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.851 09:59:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.851 09:59:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.851 09:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:34.851 09:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:34.851 09:59:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.851 09:59:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.851 [2024-10-21 09:59:11.295852] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:34.851 09:59:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.851 09:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:34.851 09:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:34.851 09:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:34.851 09:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:34.851 09:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:34.851 09:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:34.851 09:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:34.851 09:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:34.851 09:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:34.851 09:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:14:34.851 09:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:34.851 09:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.851 09:59:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.851 09:59:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.851 09:59:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.851 09:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:34.851 "name": "raid_bdev1", 00:14:34.851 "uuid": "8495a811-050e-4e2d-90e1-4ad2962f5c55", 00:14:34.851 "strip_size_kb": 0, 00:14:34.851 "state": "online", 00:14:34.851 "raid_level": "raid1", 00:14:34.851 "superblock": true, 00:14:34.851 "num_base_bdevs": 4, 00:14:34.851 "num_base_bdevs_discovered": 2, 00:14:34.851 "num_base_bdevs_operational": 2, 00:14:34.851 "base_bdevs_list": [ 00:14:34.851 { 00:14:34.851 "name": null, 00:14:34.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.851 "is_configured": false, 00:14:34.851 "data_offset": 0, 00:14:34.851 "data_size": 63488 00:14:34.851 }, 00:14:34.851 { 00:14:34.851 "name": null, 00:14:34.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.851 "is_configured": false, 00:14:34.851 "data_offset": 2048, 00:14:34.851 "data_size": 63488 00:14:34.851 }, 00:14:34.851 { 00:14:34.851 "name": "BaseBdev3", 00:14:34.851 "uuid": "800e67f0-35f7-5466-b38b-21fd1e65140a", 00:14:34.851 "is_configured": true, 00:14:34.851 "data_offset": 2048, 00:14:34.851 "data_size": 63488 00:14:34.851 }, 00:14:34.851 { 00:14:34.851 "name": "BaseBdev4", 00:14:34.851 "uuid": "7d2c0ac7-6631-5c49-afb3-7fcff52d067c", 00:14:34.851 "is_configured": true, 00:14:34.851 "data_offset": 2048, 00:14:34.851 "data_size": 63488 00:14:34.851 } 00:14:34.851 ] 00:14:34.851 }' 00:14:34.851 09:59:11 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:34.852 09:59:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.421 09:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:35.421 09:59:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.421 09:59:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.421 [2024-10-21 09:59:11.755094] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:35.421 [2024-10-21 09:59:11.755345] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:35.421 [2024-10-21 09:59:11.755365] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:14:35.421 [2024-10-21 09:59:11.755412] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:35.421 [2024-10-21 09:59:11.771670] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:14:35.421 09:59:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.421 09:59:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:35.421 [2024-10-21 09:59:11.773880] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:36.361 09:59:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:36.361 09:59:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:36.361 09:59:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:36.361 09:59:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:14:36.361 09:59:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:36.361 09:59:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.361 09:59:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:36.361 09:59:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.361 09:59:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.361 09:59:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.361 09:59:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:36.361 "name": "raid_bdev1", 00:14:36.361 "uuid": "8495a811-050e-4e2d-90e1-4ad2962f5c55", 00:14:36.361 "strip_size_kb": 0, 00:14:36.361 "state": "online", 00:14:36.361 "raid_level": "raid1", 00:14:36.361 "superblock": true, 00:14:36.361 "num_base_bdevs": 4, 00:14:36.361 "num_base_bdevs_discovered": 3, 00:14:36.361 "num_base_bdevs_operational": 3, 00:14:36.361 "process": { 00:14:36.361 "type": "rebuild", 00:14:36.361 "target": "spare", 00:14:36.361 "progress": { 00:14:36.361 "blocks": 20480, 00:14:36.361 "percent": 32 00:14:36.361 } 00:14:36.361 }, 00:14:36.361 "base_bdevs_list": [ 00:14:36.361 { 00:14:36.361 "name": "spare", 00:14:36.361 "uuid": "6d86f9ea-1eba-55e8-9a32-1c9cfabc7475", 00:14:36.361 "is_configured": true, 00:14:36.361 "data_offset": 2048, 00:14:36.361 "data_size": 63488 00:14:36.361 }, 00:14:36.361 { 00:14:36.361 "name": null, 00:14:36.361 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.361 "is_configured": false, 00:14:36.361 "data_offset": 2048, 00:14:36.361 "data_size": 63488 00:14:36.361 }, 00:14:36.361 { 00:14:36.361 "name": "BaseBdev3", 00:14:36.361 "uuid": "800e67f0-35f7-5466-b38b-21fd1e65140a", 00:14:36.361 "is_configured": true, 00:14:36.361 "data_offset": 2048, 00:14:36.361 "data_size": 
63488 00:14:36.361 }, 00:14:36.361 { 00:14:36.361 "name": "BaseBdev4", 00:14:36.361 "uuid": "7d2c0ac7-6631-5c49-afb3-7fcff52d067c", 00:14:36.361 "is_configured": true, 00:14:36.361 "data_offset": 2048, 00:14:36.361 "data_size": 63488 00:14:36.361 } 00:14:36.361 ] 00:14:36.361 }' 00:14:36.361 09:59:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:36.361 09:59:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:36.361 09:59:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:36.361 09:59:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:36.361 09:59:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:36.361 09:59:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.361 09:59:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.361 [2024-10-21 09:59:12.937283] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:36.621 [2024-10-21 09:59:12.983994] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:36.621 [2024-10-21 09:59:12.984052] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:36.621 [2024-10-21 09:59:12.984071] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:36.621 [2024-10-21 09:59:12.984079] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:36.621 09:59:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.621 09:59:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:36.621 09:59:13 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:36.621 09:59:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:36.621 09:59:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:36.621 09:59:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:36.621 09:59:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:36.621 09:59:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:36.621 09:59:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:36.621 09:59:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:36.621 09:59:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:36.621 09:59:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:36.621 09:59:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.621 09:59:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.621 09:59:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.621 09:59:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.621 09:59:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:36.621 "name": "raid_bdev1", 00:14:36.621 "uuid": "8495a811-050e-4e2d-90e1-4ad2962f5c55", 00:14:36.621 "strip_size_kb": 0, 00:14:36.621 "state": "online", 00:14:36.621 "raid_level": "raid1", 00:14:36.621 "superblock": true, 00:14:36.621 "num_base_bdevs": 4, 00:14:36.621 "num_base_bdevs_discovered": 2, 00:14:36.621 "num_base_bdevs_operational": 2, 00:14:36.621 "base_bdevs_list": [ 00:14:36.621 { 00:14:36.621 "name": null, 
00:14:36.621 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.621 "is_configured": false, 00:14:36.621 "data_offset": 0, 00:14:36.621 "data_size": 63488 00:14:36.621 }, 00:14:36.621 { 00:14:36.621 "name": null, 00:14:36.621 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.621 "is_configured": false, 00:14:36.621 "data_offset": 2048, 00:14:36.621 "data_size": 63488 00:14:36.621 }, 00:14:36.621 { 00:14:36.621 "name": "BaseBdev3", 00:14:36.621 "uuid": "800e67f0-35f7-5466-b38b-21fd1e65140a", 00:14:36.621 "is_configured": true, 00:14:36.621 "data_offset": 2048, 00:14:36.621 "data_size": 63488 00:14:36.621 }, 00:14:36.621 { 00:14:36.621 "name": "BaseBdev4", 00:14:36.621 "uuid": "7d2c0ac7-6631-5c49-afb3-7fcff52d067c", 00:14:36.621 "is_configured": true, 00:14:36.621 "data_offset": 2048, 00:14:36.621 "data_size": 63488 00:14:36.621 } 00:14:36.621 ] 00:14:36.621 }' 00:14:36.621 09:59:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:36.621 09:59:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.880 09:59:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:36.880 09:59:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.880 09:59:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.880 [2024-10-21 09:59:13.471120] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:36.880 [2024-10-21 09:59:13.471197] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:36.880 [2024-10-21 09:59:13.471232] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:14:36.880 [2024-10-21 09:59:13.471244] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:36.880 [2024-10-21 09:59:13.471853] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:14:36.881 [2024-10-21 09:59:13.471872] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:36.881 [2024-10-21 09:59:13.471986] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:36.881 [2024-10-21 09:59:13.472000] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:36.881 [2024-10-21 09:59:13.472014] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:14:36.881 [2024-10-21 09:59:13.472038] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:37.140 [2024-10-21 09:59:13.487926] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:14:37.140 spare 00:14:37.140 09:59:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.140 09:59:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:37.140 [2024-10-21 09:59:13.490117] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:38.080 09:59:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:38.080 09:59:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:38.080 09:59:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:38.080 09:59:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:38.080 09:59:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:38.080 09:59:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.080 09:59:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.080 
09:59:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:38.080 09:59:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.080 09:59:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.080 09:59:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:38.080 "name": "raid_bdev1", 00:14:38.080 "uuid": "8495a811-050e-4e2d-90e1-4ad2962f5c55", 00:14:38.080 "strip_size_kb": 0, 00:14:38.080 "state": "online", 00:14:38.080 "raid_level": "raid1", 00:14:38.080 "superblock": true, 00:14:38.080 "num_base_bdevs": 4, 00:14:38.080 "num_base_bdevs_discovered": 3, 00:14:38.080 "num_base_bdevs_operational": 3, 00:14:38.080 "process": { 00:14:38.080 "type": "rebuild", 00:14:38.080 "target": "spare", 00:14:38.080 "progress": { 00:14:38.080 "blocks": 20480, 00:14:38.080 "percent": 32 00:14:38.080 } 00:14:38.080 }, 00:14:38.080 "base_bdevs_list": [ 00:14:38.080 { 00:14:38.080 "name": "spare", 00:14:38.080 "uuid": "6d86f9ea-1eba-55e8-9a32-1c9cfabc7475", 00:14:38.080 "is_configured": true, 00:14:38.080 "data_offset": 2048, 00:14:38.080 "data_size": 63488 00:14:38.080 }, 00:14:38.080 { 00:14:38.080 "name": null, 00:14:38.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.080 "is_configured": false, 00:14:38.080 "data_offset": 2048, 00:14:38.080 "data_size": 63488 00:14:38.080 }, 00:14:38.080 { 00:14:38.080 "name": "BaseBdev3", 00:14:38.080 "uuid": "800e67f0-35f7-5466-b38b-21fd1e65140a", 00:14:38.080 "is_configured": true, 00:14:38.080 "data_offset": 2048, 00:14:38.080 "data_size": 63488 00:14:38.080 }, 00:14:38.080 { 00:14:38.080 "name": "BaseBdev4", 00:14:38.080 "uuid": "7d2c0ac7-6631-5c49-afb3-7fcff52d067c", 00:14:38.080 "is_configured": true, 00:14:38.080 "data_offset": 2048, 00:14:38.080 "data_size": 63488 00:14:38.080 } 00:14:38.080 ] 00:14:38.080 }' 00:14:38.080 09:59:14 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:38.080 09:59:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:38.080 09:59:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:38.080 09:59:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:38.080 09:59:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:38.080 09:59:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.080 09:59:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.080 [2024-10-21 09:59:14.649875] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:38.339 [2024-10-21 09:59:14.699445] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:38.339 [2024-10-21 09:59:14.699509] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:38.339 [2024-10-21 09:59:14.699526] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:38.339 [2024-10-21 09:59:14.699536] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:38.339 09:59:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.339 09:59:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:38.339 09:59:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:38.339 09:59:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:38.339 09:59:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:38.339 09:59:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 
-- # local strip_size=0 00:14:38.339 09:59:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:38.339 09:59:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:38.339 09:59:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:38.339 09:59:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:38.339 09:59:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:38.339 09:59:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.339 09:59:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.339 09:59:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.339 09:59:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:38.339 09:59:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.339 09:59:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:38.339 "name": "raid_bdev1", 00:14:38.339 "uuid": "8495a811-050e-4e2d-90e1-4ad2962f5c55", 00:14:38.339 "strip_size_kb": 0, 00:14:38.339 "state": "online", 00:14:38.339 "raid_level": "raid1", 00:14:38.339 "superblock": true, 00:14:38.339 "num_base_bdevs": 4, 00:14:38.339 "num_base_bdevs_discovered": 2, 00:14:38.339 "num_base_bdevs_operational": 2, 00:14:38.339 "base_bdevs_list": [ 00:14:38.339 { 00:14:38.339 "name": null, 00:14:38.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.339 "is_configured": false, 00:14:38.339 "data_offset": 0, 00:14:38.339 "data_size": 63488 00:14:38.339 }, 00:14:38.339 { 00:14:38.339 "name": null, 00:14:38.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.339 "is_configured": false, 00:14:38.339 "data_offset": 2048, 
00:14:38.339 "data_size": 63488 00:14:38.339 }, 00:14:38.339 { 00:14:38.339 "name": "BaseBdev3", 00:14:38.339 "uuid": "800e67f0-35f7-5466-b38b-21fd1e65140a", 00:14:38.339 "is_configured": true, 00:14:38.339 "data_offset": 2048, 00:14:38.339 "data_size": 63488 00:14:38.339 }, 00:14:38.339 { 00:14:38.339 "name": "BaseBdev4", 00:14:38.339 "uuid": "7d2c0ac7-6631-5c49-afb3-7fcff52d067c", 00:14:38.339 "is_configured": true, 00:14:38.339 "data_offset": 2048, 00:14:38.339 "data_size": 63488 00:14:38.339 } 00:14:38.339 ] 00:14:38.339 }' 00:14:38.339 09:59:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:38.339 09:59:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.599 09:59:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:38.599 09:59:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:38.599 09:59:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:38.599 09:59:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:38.599 09:59:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:38.599 09:59:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.599 09:59:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.599 09:59:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.599 09:59:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:38.599 09:59:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.859 09:59:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:38.859 "name": "raid_bdev1", 00:14:38.859 "uuid": 
"8495a811-050e-4e2d-90e1-4ad2962f5c55", 00:14:38.859 "strip_size_kb": 0, 00:14:38.859 "state": "online", 00:14:38.859 "raid_level": "raid1", 00:14:38.859 "superblock": true, 00:14:38.859 "num_base_bdevs": 4, 00:14:38.859 "num_base_bdevs_discovered": 2, 00:14:38.859 "num_base_bdevs_operational": 2, 00:14:38.859 "base_bdevs_list": [ 00:14:38.859 { 00:14:38.859 "name": null, 00:14:38.859 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.859 "is_configured": false, 00:14:38.859 "data_offset": 0, 00:14:38.859 "data_size": 63488 00:14:38.859 }, 00:14:38.859 { 00:14:38.859 "name": null, 00:14:38.859 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.859 "is_configured": false, 00:14:38.859 "data_offset": 2048, 00:14:38.859 "data_size": 63488 00:14:38.859 }, 00:14:38.859 { 00:14:38.859 "name": "BaseBdev3", 00:14:38.859 "uuid": "800e67f0-35f7-5466-b38b-21fd1e65140a", 00:14:38.859 "is_configured": true, 00:14:38.859 "data_offset": 2048, 00:14:38.859 "data_size": 63488 00:14:38.859 }, 00:14:38.859 { 00:14:38.859 "name": "BaseBdev4", 00:14:38.859 "uuid": "7d2c0ac7-6631-5c49-afb3-7fcff52d067c", 00:14:38.859 "is_configured": true, 00:14:38.859 "data_offset": 2048, 00:14:38.859 "data_size": 63488 00:14:38.859 } 00:14:38.859 ] 00:14:38.859 }' 00:14:38.859 09:59:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:38.859 09:59:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:38.859 09:59:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:38.859 09:59:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:38.859 09:59:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:38.859 09:59:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.859 09:59:15 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:38.859 09:59:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.859 09:59:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:38.859 09:59:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.859 09:59:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.859 [2024-10-21 09:59:15.278284] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:38.859 [2024-10-21 09:59:15.278348] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:38.859 [2024-10-21 09:59:15.278372] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:14:38.859 [2024-10-21 09:59:15.278385] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:38.859 [2024-10-21 09:59:15.278971] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:38.859 [2024-10-21 09:59:15.278999] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:38.859 [2024-10-21 09:59:15.279094] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:38.859 [2024-10-21 09:59:15.279116] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:38.859 [2024-10-21 09:59:15.279126] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:38.859 [2024-10-21 09:59:15.279144] bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:38.859 BaseBdev1 00:14:38.859 09:59:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.859 09:59:15 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:39.798 09:59:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:39.798 09:59:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:39.798 09:59:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:39.798 09:59:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:39.798 09:59:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:39.798 09:59:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:39.798 09:59:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:39.798 09:59:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:39.798 09:59:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:39.798 09:59:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:39.798 09:59:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.798 09:59:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.798 09:59:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.798 09:59:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.798 09:59:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.798 09:59:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:39.798 "name": "raid_bdev1", 00:14:39.798 "uuid": "8495a811-050e-4e2d-90e1-4ad2962f5c55", 00:14:39.798 "strip_size_kb": 0, 00:14:39.798 "state": "online", 00:14:39.798 
"raid_level": "raid1", 00:14:39.798 "superblock": true, 00:14:39.798 "num_base_bdevs": 4, 00:14:39.798 "num_base_bdevs_discovered": 2, 00:14:39.798 "num_base_bdevs_operational": 2, 00:14:39.798 "base_bdevs_list": [ 00:14:39.798 { 00:14:39.798 "name": null, 00:14:39.798 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.798 "is_configured": false, 00:14:39.798 "data_offset": 0, 00:14:39.798 "data_size": 63488 00:14:39.798 }, 00:14:39.798 { 00:14:39.798 "name": null, 00:14:39.798 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.798 "is_configured": false, 00:14:39.798 "data_offset": 2048, 00:14:39.798 "data_size": 63488 00:14:39.798 }, 00:14:39.798 { 00:14:39.798 "name": "BaseBdev3", 00:14:39.798 "uuid": "800e67f0-35f7-5466-b38b-21fd1e65140a", 00:14:39.798 "is_configured": true, 00:14:39.798 "data_offset": 2048, 00:14:39.798 "data_size": 63488 00:14:39.798 }, 00:14:39.798 { 00:14:39.798 "name": "BaseBdev4", 00:14:39.798 "uuid": "7d2c0ac7-6631-5c49-afb3-7fcff52d067c", 00:14:39.798 "is_configured": true, 00:14:39.798 "data_offset": 2048, 00:14:39.798 "data_size": 63488 00:14:39.798 } 00:14:39.798 ] 00:14:39.798 }' 00:14:39.798 09:59:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:39.798 09:59:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.367 09:59:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:40.367 09:59:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:40.367 09:59:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:40.367 09:59:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:40.367 09:59:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:40.367 09:59:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:14:40.367 09:59:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:40.367 09:59:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.367 09:59:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.367 09:59:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.367 09:59:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:40.367 "name": "raid_bdev1", 00:14:40.367 "uuid": "8495a811-050e-4e2d-90e1-4ad2962f5c55", 00:14:40.367 "strip_size_kb": 0, 00:14:40.367 "state": "online", 00:14:40.367 "raid_level": "raid1", 00:14:40.367 "superblock": true, 00:14:40.367 "num_base_bdevs": 4, 00:14:40.367 "num_base_bdevs_discovered": 2, 00:14:40.367 "num_base_bdevs_operational": 2, 00:14:40.367 "base_bdevs_list": [ 00:14:40.367 { 00:14:40.367 "name": null, 00:14:40.367 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.367 "is_configured": false, 00:14:40.367 "data_offset": 0, 00:14:40.367 "data_size": 63488 00:14:40.367 }, 00:14:40.367 { 00:14:40.367 "name": null, 00:14:40.367 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.367 "is_configured": false, 00:14:40.367 "data_offset": 2048, 00:14:40.367 "data_size": 63488 00:14:40.367 }, 00:14:40.367 { 00:14:40.367 "name": "BaseBdev3", 00:14:40.367 "uuid": "800e67f0-35f7-5466-b38b-21fd1e65140a", 00:14:40.367 "is_configured": true, 00:14:40.367 "data_offset": 2048, 00:14:40.367 "data_size": 63488 00:14:40.367 }, 00:14:40.367 { 00:14:40.367 "name": "BaseBdev4", 00:14:40.367 "uuid": "7d2c0ac7-6631-5c49-afb3-7fcff52d067c", 00:14:40.367 "is_configured": true, 00:14:40.367 "data_offset": 2048, 00:14:40.367 "data_size": 63488 00:14:40.367 } 00:14:40.367 ] 00:14:40.367 }' 00:14:40.367 09:59:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 
00:14:40.367 09:59:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:40.367 09:59:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:40.367 09:59:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:40.367 09:59:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:40.367 09:59:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:14:40.367 09:59:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:40.367 09:59:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:40.367 09:59:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:40.367 09:59:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:40.367 09:59:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:40.367 09:59:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:40.367 09:59:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.367 09:59:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.367 [2024-10-21 09:59:16.883627] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:40.367 [2024-10-21 09:59:16.883839] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:40.367 [2024-10-21 09:59:16.883858] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:40.367 
request: 00:14:40.367 { 00:14:40.367 "base_bdev": "BaseBdev1", 00:14:40.367 "raid_bdev": "raid_bdev1", 00:14:40.367 "method": "bdev_raid_add_base_bdev", 00:14:40.367 "req_id": 1 00:14:40.367 } 00:14:40.367 Got JSON-RPC error response 00:14:40.367 response: 00:14:40.367 { 00:14:40.367 "code": -22, 00:14:40.367 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:40.367 } 00:14:40.367 09:59:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:40.367 09:59:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:14:40.367 09:59:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:40.367 09:59:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:40.367 09:59:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:40.367 09:59:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:41.307 09:59:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:41.307 09:59:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:41.307 09:59:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:41.307 09:59:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:41.307 09:59:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:41.307 09:59:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:41.307 09:59:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:41.307 09:59:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:41.307 09:59:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:14:41.307 09:59:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:41.567 09:59:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.567 09:59:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:41.567 09:59:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.567 09:59:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.567 09:59:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.567 09:59:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:41.567 "name": "raid_bdev1", 00:14:41.567 "uuid": "8495a811-050e-4e2d-90e1-4ad2962f5c55", 00:14:41.567 "strip_size_kb": 0, 00:14:41.567 "state": "online", 00:14:41.567 "raid_level": "raid1", 00:14:41.567 "superblock": true, 00:14:41.567 "num_base_bdevs": 4, 00:14:41.567 "num_base_bdevs_discovered": 2, 00:14:41.567 "num_base_bdevs_operational": 2, 00:14:41.567 "base_bdevs_list": [ 00:14:41.567 { 00:14:41.567 "name": null, 00:14:41.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.567 "is_configured": false, 00:14:41.567 "data_offset": 0, 00:14:41.567 "data_size": 63488 00:14:41.567 }, 00:14:41.567 { 00:14:41.567 "name": null, 00:14:41.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.567 "is_configured": false, 00:14:41.567 "data_offset": 2048, 00:14:41.567 "data_size": 63488 00:14:41.567 }, 00:14:41.567 { 00:14:41.567 "name": "BaseBdev3", 00:14:41.567 "uuid": "800e67f0-35f7-5466-b38b-21fd1e65140a", 00:14:41.567 "is_configured": true, 00:14:41.567 "data_offset": 2048, 00:14:41.567 "data_size": 63488 00:14:41.567 }, 00:14:41.567 { 00:14:41.567 "name": "BaseBdev4", 00:14:41.567 "uuid": "7d2c0ac7-6631-5c49-afb3-7fcff52d067c", 00:14:41.567 "is_configured": true, 00:14:41.567 
"data_offset": 2048, 00:14:41.567 "data_size": 63488 00:14:41.567 } 00:14:41.567 ] 00:14:41.567 }' 00:14:41.567 09:59:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:41.567 09:59:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.828 09:59:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:41.828 09:59:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:41.828 09:59:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:41.828 09:59:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:41.828 09:59:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:41.828 09:59:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.828 09:59:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.828 09:59:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.828 09:59:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:41.828 09:59:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.828 09:59:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:41.828 "name": "raid_bdev1", 00:14:41.828 "uuid": "8495a811-050e-4e2d-90e1-4ad2962f5c55", 00:14:41.828 "strip_size_kb": 0, 00:14:41.828 "state": "online", 00:14:41.828 "raid_level": "raid1", 00:14:41.828 "superblock": true, 00:14:41.828 "num_base_bdevs": 4, 00:14:41.828 "num_base_bdevs_discovered": 2, 00:14:41.828 "num_base_bdevs_operational": 2, 00:14:41.828 "base_bdevs_list": [ 00:14:41.828 { 00:14:41.828 "name": null, 00:14:41.828 "uuid": "00000000-0000-0000-0000-000000000000", 
00:14:41.828 "is_configured": false, 00:14:41.828 "data_offset": 0, 00:14:41.828 "data_size": 63488 00:14:41.828 }, 00:14:41.828 { 00:14:41.828 "name": null, 00:14:41.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.828 "is_configured": false, 00:14:41.828 "data_offset": 2048, 00:14:41.828 "data_size": 63488 00:14:41.828 }, 00:14:41.828 { 00:14:41.828 "name": "BaseBdev3", 00:14:41.828 "uuid": "800e67f0-35f7-5466-b38b-21fd1e65140a", 00:14:41.828 "is_configured": true, 00:14:41.828 "data_offset": 2048, 00:14:41.828 "data_size": 63488 00:14:41.828 }, 00:14:41.828 { 00:14:41.828 "name": "BaseBdev4", 00:14:41.828 "uuid": "7d2c0ac7-6631-5c49-afb3-7fcff52d067c", 00:14:41.828 "is_configured": true, 00:14:41.828 "data_offset": 2048, 00:14:41.828 "data_size": 63488 00:14:41.828 } 00:14:41.828 ] 00:14:41.828 }' 00:14:41.828 09:59:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:42.088 09:59:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:42.089 09:59:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:42.089 09:59:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:42.089 09:59:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 77629 00:14:42.089 09:59:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 77629 ']' 00:14:42.089 09:59:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 77629 00:14:42.089 09:59:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:14:42.089 09:59:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:42.089 09:59:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77629 00:14:42.089 09:59:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 
-- # process_name=reactor_0 00:14:42.089 09:59:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:42.089 killing process with pid 77629 00:14:42.089 09:59:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77629' 00:14:42.089 09:59:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 77629 00:14:42.089 Received shutdown signal, test time was about 60.000000 seconds 00:14:42.089 00:14:42.089 Latency(us) 00:14:42.089 [2024-10-21T09:59:18.684Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:42.089 [2024-10-21T09:59:18.684Z] =================================================================================================================== 00:14:42.089 [2024-10-21T09:59:18.684Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:42.089 [2024-10-21 09:59:18.516147] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:42.089 09:59:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 77629 00:14:42.089 [2024-10-21 09:59:18.516298] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:42.089 [2024-10-21 09:59:18.516378] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:42.089 [2024-10-21 09:59:18.516394] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000005f00 name raid_bdev1, state offline 00:14:42.659 [2024-10-21 09:59:19.044541] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:44.040 09:59:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:14:44.040 00:14:44.040 real 0m25.397s 00:14:44.040 user 0m30.810s 00:14:44.040 sys 0m4.020s 00:14:44.040 09:59:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:44.040 09:59:20 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:44.040 ************************************ 00:14:44.040 END TEST raid_rebuild_test_sb 00:14:44.040 ************************************ 00:14:44.040 09:59:20 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:14:44.040 09:59:20 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:14:44.040 09:59:20 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:44.040 09:59:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:44.040 ************************************ 00:14:44.040 START TEST raid_rebuild_test_io 00:14:44.040 ************************************ 00:14:44.040 09:59:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 false true true 00:14:44.040 09:59:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:44.040 09:59:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:44.040 09:59:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:44.040 09:59:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:14:44.040 09:59:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:44.040 09:59:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:44.040 09:59:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:44.040 09:59:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:44.040 09:59:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:44.040 09:59:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:44.040 09:59:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:44.040 09:59:20 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:44.040 09:59:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:44.040 09:59:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:44.040 09:59:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:44.040 09:59:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:44.040 09:59:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:44.040 09:59:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:44.040 09:59:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:44.040 09:59:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:44.040 09:59:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:44.040 09:59:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:44.040 09:59:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:44.040 09:59:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:44.040 09:59:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:44.040 09:59:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:44.040 09:59:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:44.040 09:59:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:44.040 09:59:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:14:44.040 09:59:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T 
raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:44.040 09:59:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=78388 00:14:44.040 09:59:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 78388 00:14:44.040 09:59:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@831 -- # '[' -z 78388 ']' 00:14:44.040 09:59:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:44.040 09:59:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:44.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:44.040 09:59:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:44.040 09:59:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:44.040 09:59:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.040 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:44.040 Zero copy mechanism will not be used. 00:14:44.040 [2024-10-21 09:59:20.425744] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
00:14:44.040 [2024-10-21 09:59:20.425895] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78388 ] 00:14:44.040 [2024-10-21 09:59:20.606710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:44.300 [2024-10-21 09:59:20.758256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:44.560 [2024-10-21 09:59:21.010677] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:44.560 [2024-10-21 09:59:21.010718] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:44.820 09:59:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:44.820 09:59:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # return 0 00:14:44.820 09:59:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:44.820 09:59:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:44.820 09:59:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.820 09:59:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.820 BaseBdev1_malloc 00:14:44.820 09:59:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.820 09:59:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:44.820 09:59:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.820 09:59:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.820 [2024-10-21 09:59:21.311764] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:14:44.820 [2024-10-21 09:59:21.311839] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:44.820 [2024-10-21 09:59:21.311867] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:14:44.820 [2024-10-21 09:59:21.311880] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:44.820 [2024-10-21 09:59:21.314379] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:44.820 [2024-10-21 09:59:21.314415] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:44.820 BaseBdev1 00:14:44.820 09:59:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.820 09:59:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:44.820 09:59:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:44.820 09:59:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.820 09:59:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.820 BaseBdev2_malloc 00:14:44.820 09:59:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.820 09:59:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:44.820 09:59:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.820 09:59:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.820 [2024-10-21 09:59:21.374689] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:44.820 [2024-10-21 09:59:21.374741] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:44.820 [2024-10-21 09:59:21.374761] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:14:44.820 [2024-10-21 09:59:21.374773] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:44.820 [2024-10-21 09:59:21.377120] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:44.820 [2024-10-21 09:59:21.377152] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:44.820 BaseBdev2 00:14:44.820 09:59:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.820 09:59:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:44.820 09:59:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:44.820 09:59:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.820 09:59:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:45.081 BaseBdev3_malloc 00:14:45.081 09:59:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.081 09:59:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:45.081 09:59:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.081 09:59:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:45.081 [2024-10-21 09:59:21.453262] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:45.081 [2024-10-21 09:59:21.453308] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:45.081 [2024-10-21 09:59:21.453330] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:14:45.081 [2024-10-21 09:59:21.453342] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:14:45.081 [2024-10-21 09:59:21.455644] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:45.081 [2024-10-21 09:59:21.455677] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:45.081 BaseBdev3 00:14:45.081 09:59:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.081 09:59:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:45.081 09:59:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:45.081 09:59:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.081 09:59:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:45.081 BaseBdev4_malloc 00:14:45.081 09:59:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.081 09:59:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:45.081 09:59:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.081 09:59:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:45.081 [2024-10-21 09:59:21.520448] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:45.081 [2024-10-21 09:59:21.520496] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:45.081 [2024-10-21 09:59:21.520514] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:14:45.081 [2024-10-21 09:59:21.520526] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:45.081 [2024-10-21 09:59:21.522903] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:45.081 [2024-10-21 09:59:21.522937] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:45.081 BaseBdev4 00:14:45.081 09:59:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.081 09:59:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:45.081 09:59:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.081 09:59:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:45.081 spare_malloc 00:14:45.081 09:59:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.081 09:59:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:45.081 09:59:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.081 09:59:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:45.081 spare_delay 00:14:45.081 09:59:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.081 09:59:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:45.081 09:59:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.081 09:59:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:45.081 [2024-10-21 09:59:21.595580] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:45.081 [2024-10-21 09:59:21.595638] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:45.081 [2024-10-21 09:59:21.595658] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:14:45.081 [2024-10-21 09:59:21.595669] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:14:45.081 [2024-10-21 09:59:21.597990] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:45.081 [2024-10-21 09:59:21.598022] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:45.081 spare 00:14:45.081 09:59:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.081 09:59:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:45.081 09:59:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.081 09:59:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:45.081 [2024-10-21 09:59:21.607613] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:45.081 [2024-10-21 09:59:21.609669] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:45.081 [2024-10-21 09:59:21.609743] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:45.081 [2024-10-21 09:59:21.609792] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:45.081 [2024-10-21 09:59:21.609871] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000005b80 00:14:45.081 [2024-10-21 09:59:21.609888] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:45.081 [2024-10-21 09:59:21.610136] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:14:45.081 [2024-10-21 09:59:21.610310] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000005b80 00:14:45.081 [2024-10-21 09:59:21.610326] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000005b80 00:14:45.081 [2024-10-21 09:59:21.610498] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:14:45.081 09:59:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.081 09:59:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:45.081 09:59:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:45.081 09:59:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:45.081 09:59:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:45.081 09:59:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:45.081 09:59:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:45.081 09:59:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:45.081 09:59:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:45.081 09:59:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:45.081 09:59:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:45.081 09:59:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.081 09:59:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:45.081 09:59:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.081 09:59:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:45.081 09:59:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.081 09:59:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:45.081 "name": "raid_bdev1", 00:14:45.081 "uuid": "2a11240b-57f9-4128-a420-0e8925582a0d", 00:14:45.081 
"strip_size_kb": 0, 00:14:45.081 "state": "online", 00:14:45.081 "raid_level": "raid1", 00:14:45.081 "superblock": false, 00:14:45.081 "num_base_bdevs": 4, 00:14:45.081 "num_base_bdevs_discovered": 4, 00:14:45.081 "num_base_bdevs_operational": 4, 00:14:45.081 "base_bdevs_list": [ 00:14:45.081 { 00:14:45.081 "name": "BaseBdev1", 00:14:45.081 "uuid": "c8d2efc1-cfaa-502a-8c3c-eb3ee097e8e2", 00:14:45.081 "is_configured": true, 00:14:45.081 "data_offset": 0, 00:14:45.081 "data_size": 65536 00:14:45.081 }, 00:14:45.081 { 00:14:45.081 "name": "BaseBdev2", 00:14:45.081 "uuid": "474b7f62-41fd-5949-952b-10d0b08cda0e", 00:14:45.081 "is_configured": true, 00:14:45.081 "data_offset": 0, 00:14:45.081 "data_size": 65536 00:14:45.081 }, 00:14:45.081 { 00:14:45.081 "name": "BaseBdev3", 00:14:45.081 "uuid": "04018d2f-42cc-537b-8ada-bb095e3baaa9", 00:14:45.081 "is_configured": true, 00:14:45.081 "data_offset": 0, 00:14:45.081 "data_size": 65536 00:14:45.081 }, 00:14:45.081 { 00:14:45.081 "name": "BaseBdev4", 00:14:45.081 "uuid": "a1067d6f-0fa7-5f46-a450-fed30360156a", 00:14:45.081 "is_configured": true, 00:14:45.081 "data_offset": 0, 00:14:45.081 "data_size": 65536 00:14:45.081 } 00:14:45.081 ] 00:14:45.081 }' 00:14:45.081 09:59:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:45.081 09:59:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:45.651 09:59:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:45.651 09:59:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.651 09:59:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:45.651 09:59:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:45.651 [2024-10-21 09:59:22.055212] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:45.651 09:59:22 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.651 09:59:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:14:45.651 09:59:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:45.651 09:59:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.651 09:59:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.651 09:59:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:45.651 09:59:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.651 09:59:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:45.651 09:59:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:14:45.651 09:59:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:45.651 09:59:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:45.651 09:59:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.651 09:59:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:45.651 [2024-10-21 09:59:22.158665] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:45.651 09:59:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.651 09:59:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:45.651 09:59:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:45.651 09:59:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:14:45.652 09:59:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:45.652 09:59:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:45.652 09:59:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:45.652 09:59:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:45.652 09:59:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:45.652 09:59:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:45.652 09:59:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:45.652 09:59:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.652 09:59:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.652 09:59:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:45.652 09:59:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:45.652 09:59:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.652 09:59:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:45.652 "name": "raid_bdev1", 00:14:45.652 "uuid": "2a11240b-57f9-4128-a420-0e8925582a0d", 00:14:45.652 "strip_size_kb": 0, 00:14:45.652 "state": "online", 00:14:45.652 "raid_level": "raid1", 00:14:45.652 "superblock": false, 00:14:45.652 "num_base_bdevs": 4, 00:14:45.652 "num_base_bdevs_discovered": 3, 00:14:45.652 "num_base_bdevs_operational": 3, 00:14:45.652 "base_bdevs_list": [ 00:14:45.652 { 00:14:45.652 "name": null, 00:14:45.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.652 "is_configured": false, 00:14:45.652 "data_offset": 0, 00:14:45.652 "data_size": 65536 00:14:45.652 
}, 00:14:45.652 { 00:14:45.652 "name": "BaseBdev2", 00:14:45.652 "uuid": "474b7f62-41fd-5949-952b-10d0b08cda0e", 00:14:45.652 "is_configured": true, 00:14:45.652 "data_offset": 0, 00:14:45.652 "data_size": 65536 00:14:45.652 }, 00:14:45.652 { 00:14:45.652 "name": "BaseBdev3", 00:14:45.652 "uuid": "04018d2f-42cc-537b-8ada-bb095e3baaa9", 00:14:45.652 "is_configured": true, 00:14:45.652 "data_offset": 0, 00:14:45.652 "data_size": 65536 00:14:45.652 }, 00:14:45.652 { 00:14:45.652 "name": "BaseBdev4", 00:14:45.652 "uuid": "a1067d6f-0fa7-5f46-a450-fed30360156a", 00:14:45.652 "is_configured": true, 00:14:45.652 "data_offset": 0, 00:14:45.652 "data_size": 65536 00:14:45.652 } 00:14:45.652 ] 00:14:45.652 }' 00:14:45.652 09:59:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:45.652 09:59:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:45.912 [2024-10-21 09:59:22.269778] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:45.912 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:45.912 Zero copy mechanism will not be used. 00:14:45.912 Running I/O for 60 seconds... 
00:14:46.172 09:59:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:46.172 09:59:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.172 09:59:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:46.172 [2024-10-21 09:59:22.621515] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:46.172 09:59:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.172 09:59:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:46.172 [2024-10-21 09:59:22.707810] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:14:46.172 [2024-10-21 09:59:22.710151] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:46.431 [2024-10-21 09:59:22.853018] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:46.431 [2024-10-21 09:59:22.998382] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:46.431 [2024-10-21 09:59:22.998977] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:46.691 172.00 IOPS, 516.00 MiB/s [2024-10-21T09:59:23.286Z] [2024-10-21 09:59:23.271780] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:46.691 [2024-10-21 09:59:23.274131] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:46.951 [2024-10-21 09:59:23.530321] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:46.951 [2024-10-21 09:59:23.530866] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:47.210 09:59:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:47.210 09:59:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:47.210 09:59:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:47.210 09:59:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:47.210 09:59:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:47.210 09:59:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.210 09:59:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.210 09:59:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:47.211 09:59:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:47.211 09:59:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.211 09:59:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:47.211 "name": "raid_bdev1", 00:14:47.211 "uuid": "2a11240b-57f9-4128-a420-0e8925582a0d", 00:14:47.211 "strip_size_kb": 0, 00:14:47.211 "state": "online", 00:14:47.211 "raid_level": "raid1", 00:14:47.211 "superblock": false, 00:14:47.211 "num_base_bdevs": 4, 00:14:47.211 "num_base_bdevs_discovered": 4, 00:14:47.211 "num_base_bdevs_operational": 4, 00:14:47.211 "process": { 00:14:47.211 "type": "rebuild", 00:14:47.211 "target": "spare", 00:14:47.211 "progress": { 00:14:47.211 "blocks": 10240, 00:14:47.211 "percent": 15 00:14:47.211 } 00:14:47.211 }, 00:14:47.211 "base_bdevs_list": [ 00:14:47.211 { 00:14:47.211 "name": "spare", 00:14:47.211 "uuid": 
"217821ac-228f-5e2c-9035-ce7a716c3427", 00:14:47.211 "is_configured": true, 00:14:47.211 "data_offset": 0, 00:14:47.211 "data_size": 65536 00:14:47.211 }, 00:14:47.211 { 00:14:47.211 "name": "BaseBdev2", 00:14:47.211 "uuid": "474b7f62-41fd-5949-952b-10d0b08cda0e", 00:14:47.211 "is_configured": true, 00:14:47.211 "data_offset": 0, 00:14:47.211 "data_size": 65536 00:14:47.211 }, 00:14:47.211 { 00:14:47.211 "name": "BaseBdev3", 00:14:47.211 "uuid": "04018d2f-42cc-537b-8ada-bb095e3baaa9", 00:14:47.211 "is_configured": true, 00:14:47.211 "data_offset": 0, 00:14:47.211 "data_size": 65536 00:14:47.211 }, 00:14:47.211 { 00:14:47.211 "name": "BaseBdev4", 00:14:47.211 "uuid": "a1067d6f-0fa7-5f46-a450-fed30360156a", 00:14:47.211 "is_configured": true, 00:14:47.211 "data_offset": 0, 00:14:47.211 "data_size": 65536 00:14:47.211 } 00:14:47.211 ] 00:14:47.211 }' 00:14:47.211 09:59:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:47.211 09:59:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:47.211 09:59:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:47.489 09:59:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:47.489 09:59:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:47.489 09:59:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.489 09:59:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:47.489 [2024-10-21 09:59:23.836112] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:47.489 [2024-10-21 09:59:23.845842] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:47.489 [2024-10-21 09:59:23.951429] bdev_raid.c:2571:raid_bdev_process_finish_done: 
*WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:47.489 [2024-10-21 09:59:23.967885] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:47.489 [2024-10-21 09:59:23.967961] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:47.489 [2024-10-21 09:59:23.967992] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:47.489 [2024-10-21 09:59:24.000254] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005fb0 00:14:47.489 09:59:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.489 09:59:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:47.489 09:59:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:47.489 09:59:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:47.489 09:59:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:47.489 09:59:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:47.489 09:59:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:47.489 09:59:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:47.489 09:59:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:47.489 09:59:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:47.489 09:59:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:47.489 09:59:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:47.489 09:59:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:14:47.489 09:59:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.489 09:59:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:47.489 09:59:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.489 09:59:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:47.490 "name": "raid_bdev1", 00:14:47.490 "uuid": "2a11240b-57f9-4128-a420-0e8925582a0d", 00:14:47.490 "strip_size_kb": 0, 00:14:47.490 "state": "online", 00:14:47.490 "raid_level": "raid1", 00:14:47.490 "superblock": false, 00:14:47.490 "num_base_bdevs": 4, 00:14:47.490 "num_base_bdevs_discovered": 3, 00:14:47.490 "num_base_bdevs_operational": 3, 00:14:47.490 "base_bdevs_list": [ 00:14:47.490 { 00:14:47.490 "name": null, 00:14:47.490 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.490 "is_configured": false, 00:14:47.490 "data_offset": 0, 00:14:47.490 "data_size": 65536 00:14:47.490 }, 00:14:47.490 { 00:14:47.490 "name": "BaseBdev2", 00:14:47.490 "uuid": "474b7f62-41fd-5949-952b-10d0b08cda0e", 00:14:47.490 "is_configured": true, 00:14:47.490 "data_offset": 0, 00:14:47.490 "data_size": 65536 00:14:47.490 }, 00:14:47.490 { 00:14:47.490 "name": "BaseBdev3", 00:14:47.490 "uuid": "04018d2f-42cc-537b-8ada-bb095e3baaa9", 00:14:47.490 "is_configured": true, 00:14:47.490 "data_offset": 0, 00:14:47.490 "data_size": 65536 00:14:47.490 }, 00:14:47.490 { 00:14:47.490 "name": "BaseBdev4", 00:14:47.490 "uuid": "a1067d6f-0fa7-5f46-a450-fed30360156a", 00:14:47.490 "is_configured": true, 00:14:47.490 "data_offset": 0, 00:14:47.490 "data_size": 65536 00:14:47.490 } 00:14:47.490 ] 00:14:47.490 }' 00:14:47.490 09:59:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:47.490 09:59:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:48.009 140.50 IOPS, 421.50 MiB/s 
[2024-10-21T09:59:24.604Z] 09:59:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:48.009 09:59:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:48.009 09:59:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:48.009 09:59:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:48.009 09:59:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:48.009 09:59:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:48.009 09:59:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.009 09:59:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.009 09:59:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:48.009 09:59:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.009 09:59:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:48.009 "name": "raid_bdev1", 00:14:48.009 "uuid": "2a11240b-57f9-4128-a420-0e8925582a0d", 00:14:48.009 "strip_size_kb": 0, 00:14:48.009 "state": "online", 00:14:48.009 "raid_level": "raid1", 00:14:48.010 "superblock": false, 00:14:48.010 "num_base_bdevs": 4, 00:14:48.010 "num_base_bdevs_discovered": 3, 00:14:48.010 "num_base_bdevs_operational": 3, 00:14:48.010 "base_bdevs_list": [ 00:14:48.010 { 00:14:48.010 "name": null, 00:14:48.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.010 "is_configured": false, 00:14:48.010 "data_offset": 0, 00:14:48.010 "data_size": 65536 00:14:48.010 }, 00:14:48.010 { 00:14:48.010 "name": "BaseBdev2", 00:14:48.010 "uuid": "474b7f62-41fd-5949-952b-10d0b08cda0e", 00:14:48.010 "is_configured": true, 00:14:48.010 
"data_offset": 0, 00:14:48.010 "data_size": 65536 00:14:48.010 }, 00:14:48.010 { 00:14:48.010 "name": "BaseBdev3", 00:14:48.010 "uuid": "04018d2f-42cc-537b-8ada-bb095e3baaa9", 00:14:48.010 "is_configured": true, 00:14:48.010 "data_offset": 0, 00:14:48.010 "data_size": 65536 00:14:48.010 }, 00:14:48.010 { 00:14:48.010 "name": "BaseBdev4", 00:14:48.010 "uuid": "a1067d6f-0fa7-5f46-a450-fed30360156a", 00:14:48.010 "is_configured": true, 00:14:48.010 "data_offset": 0, 00:14:48.010 "data_size": 65536 00:14:48.010 } 00:14:48.010 ] 00:14:48.010 }' 00:14:48.010 09:59:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:48.010 09:59:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:48.010 09:59:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:48.270 09:59:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:48.270 09:59:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:48.270 09:59:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.270 09:59:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:48.270 [2024-10-21 09:59:24.643865] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:48.270 09:59:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.270 09:59:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:48.270 [2024-10-21 09:59:24.716085] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:14:48.270 [2024-10-21 09:59:24.718454] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:48.270 [2024-10-21 09:59:24.833342] bdev_raid.c: 859:raid_bdev_submit_rw_request: 
*DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:48.270 [2024-10-21 09:59:24.835830] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:48.840 152.67 IOPS, 458.00 MiB/s [2024-10-21T09:59:25.435Z] [2024-10-21 09:59:25.375535] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:49.100 [2024-10-21 09:59:25.494877] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:49.100 [2024-10-21 09:59:25.495416] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:49.359 09:59:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:49.359 09:59:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:49.359 09:59:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:49.359 09:59:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:49.359 09:59:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:49.359 09:59:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.359 09:59:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.359 09:59:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.359 09:59:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.359 09:59:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.359 09:59:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:49.359 
"name": "raid_bdev1", 00:14:49.359 "uuid": "2a11240b-57f9-4128-a420-0e8925582a0d", 00:14:49.359 "strip_size_kb": 0, 00:14:49.359 "state": "online", 00:14:49.359 "raid_level": "raid1", 00:14:49.359 "superblock": false, 00:14:49.359 "num_base_bdevs": 4, 00:14:49.359 "num_base_bdevs_discovered": 4, 00:14:49.359 "num_base_bdevs_operational": 4, 00:14:49.359 "process": { 00:14:49.359 "type": "rebuild", 00:14:49.359 "target": "spare", 00:14:49.359 "progress": { 00:14:49.359 "blocks": 12288, 00:14:49.359 "percent": 18 00:14:49.359 } 00:14:49.359 }, 00:14:49.359 "base_bdevs_list": [ 00:14:49.359 { 00:14:49.359 "name": "spare", 00:14:49.359 "uuid": "217821ac-228f-5e2c-9035-ce7a716c3427", 00:14:49.359 "is_configured": true, 00:14:49.359 "data_offset": 0, 00:14:49.359 "data_size": 65536 00:14:49.359 }, 00:14:49.359 { 00:14:49.359 "name": "BaseBdev2", 00:14:49.359 "uuid": "474b7f62-41fd-5949-952b-10d0b08cda0e", 00:14:49.359 "is_configured": true, 00:14:49.359 "data_offset": 0, 00:14:49.359 "data_size": 65536 00:14:49.359 }, 00:14:49.359 { 00:14:49.359 "name": "BaseBdev3", 00:14:49.359 "uuid": "04018d2f-42cc-537b-8ada-bb095e3baaa9", 00:14:49.359 "is_configured": true, 00:14:49.359 "data_offset": 0, 00:14:49.359 "data_size": 65536 00:14:49.359 }, 00:14:49.359 { 00:14:49.359 "name": "BaseBdev4", 00:14:49.359 "uuid": "a1067d6f-0fa7-5f46-a450-fed30360156a", 00:14:49.359 "is_configured": true, 00:14:49.359 "data_offset": 0, 00:14:49.359 "data_size": 65536 00:14:49.359 } 00:14:49.359 ] 00:14:49.359 }' 00:14:49.359 09:59:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:49.359 09:59:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:49.359 09:59:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:49.359 09:59:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:49.359 09:59:25 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:49.359 09:59:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:49.359 09:59:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:49.359 09:59:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:49.359 09:59:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:49.359 09:59:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.359 09:59:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.359 [2024-10-21 09:59:25.831514] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:49.359 [2024-10-21 09:59:25.879622] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:49.360 [2024-10-21 09:59:25.880968] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:49.619 [2024-10-21 09:59:25.990501] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000005fb0 00:14:49.619 [2024-10-21 09:59:25.990581] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006150 00:14:49.619 09:59:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.619 09:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:49.619 09:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:49.619 09:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:49.619 09:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:14:49.619 09:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:49.619 09:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:49.619 09:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:49.619 09:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.619 09:59:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.619 09:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.619 09:59:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.619 09:59:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.619 09:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:49.619 "name": "raid_bdev1", 00:14:49.619 "uuid": "2a11240b-57f9-4128-a420-0e8925582a0d", 00:14:49.619 "strip_size_kb": 0, 00:14:49.619 "state": "online", 00:14:49.619 "raid_level": "raid1", 00:14:49.619 "superblock": false, 00:14:49.619 "num_base_bdevs": 4, 00:14:49.619 "num_base_bdevs_discovered": 3, 00:14:49.619 "num_base_bdevs_operational": 3, 00:14:49.619 "process": { 00:14:49.619 "type": "rebuild", 00:14:49.619 "target": "spare", 00:14:49.619 "progress": { 00:14:49.619 "blocks": 16384, 00:14:49.619 "percent": 25 00:14:49.619 } 00:14:49.619 }, 00:14:49.619 "base_bdevs_list": [ 00:14:49.619 { 00:14:49.619 "name": "spare", 00:14:49.619 "uuid": "217821ac-228f-5e2c-9035-ce7a716c3427", 00:14:49.620 "is_configured": true, 00:14:49.620 "data_offset": 0, 00:14:49.620 "data_size": 65536 00:14:49.620 }, 00:14:49.620 { 00:14:49.620 "name": null, 00:14:49.620 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.620 "is_configured": false, 00:14:49.620 "data_offset": 0, 00:14:49.620 
"data_size": 65536 00:14:49.620 }, 00:14:49.620 { 00:14:49.620 "name": "BaseBdev3", 00:14:49.620 "uuid": "04018d2f-42cc-537b-8ada-bb095e3baaa9", 00:14:49.620 "is_configured": true, 00:14:49.620 "data_offset": 0, 00:14:49.620 "data_size": 65536 00:14:49.620 }, 00:14:49.620 { 00:14:49.620 "name": "BaseBdev4", 00:14:49.620 "uuid": "a1067d6f-0fa7-5f46-a450-fed30360156a", 00:14:49.620 "is_configured": true, 00:14:49.620 "data_offset": 0, 00:14:49.620 "data_size": 65536 00:14:49.620 } 00:14:49.620 ] 00:14:49.620 }' 00:14:49.620 09:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:49.620 09:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:49.620 09:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:49.620 09:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:49.620 09:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=493 00:14:49.620 09:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:49.620 09:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:49.620 09:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:49.620 09:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:49.620 09:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:49.620 09:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:49.620 09:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.620 09:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.620 
09:59:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.620 09:59:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.620 09:59:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.620 09:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:49.620 "name": "raid_bdev1", 00:14:49.620 "uuid": "2a11240b-57f9-4128-a420-0e8925582a0d", 00:14:49.620 "strip_size_kb": 0, 00:14:49.620 "state": "online", 00:14:49.620 "raid_level": "raid1", 00:14:49.620 "superblock": false, 00:14:49.620 "num_base_bdevs": 4, 00:14:49.620 "num_base_bdevs_discovered": 3, 00:14:49.620 "num_base_bdevs_operational": 3, 00:14:49.620 "process": { 00:14:49.620 "type": "rebuild", 00:14:49.620 "target": "spare", 00:14:49.620 "progress": { 00:14:49.620 "blocks": 18432, 00:14:49.620 "percent": 28 00:14:49.620 } 00:14:49.620 }, 00:14:49.620 "base_bdevs_list": [ 00:14:49.620 { 00:14:49.620 "name": "spare", 00:14:49.620 "uuid": "217821ac-228f-5e2c-9035-ce7a716c3427", 00:14:49.620 "is_configured": true, 00:14:49.620 "data_offset": 0, 00:14:49.620 "data_size": 65536 00:14:49.620 }, 00:14:49.620 { 00:14:49.620 "name": null, 00:14:49.620 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.620 "is_configured": false, 00:14:49.620 "data_offset": 0, 00:14:49.620 "data_size": 65536 00:14:49.620 }, 00:14:49.620 { 00:14:49.620 "name": "BaseBdev3", 00:14:49.620 "uuid": "04018d2f-42cc-537b-8ada-bb095e3baaa9", 00:14:49.620 "is_configured": true, 00:14:49.620 "data_offset": 0, 00:14:49.620 "data_size": 65536 00:14:49.620 }, 00:14:49.620 { 00:14:49.620 "name": "BaseBdev4", 00:14:49.620 "uuid": "a1067d6f-0fa7-5f46-a450-fed30360156a", 00:14:49.620 "is_configured": true, 00:14:49.620 "data_offset": 0, 00:14:49.620 "data_size": 65536 00:14:49.620 } 00:14:49.620 ] 00:14:49.620 }' 00:14:49.880 09:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r 
'.process.type // "none"' 00:14:49.880 09:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:49.880 128.00 IOPS, 384.00 MiB/s [2024-10-21T09:59:26.475Z] 09:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:49.880 [2024-10-21 09:59:26.287443] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:14:49.880 09:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:49.880 09:59:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:49.880 [2024-10-21 09:59:26.397545] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:14:50.140 [2024-10-21 09:59:26.728930] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:14:50.400 [2024-10-21 09:59:26.954086] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:14:50.969 114.00 IOPS, 342.00 MiB/s [2024-10-21T09:59:27.564Z] 09:59:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:50.969 09:59:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:50.969 09:59:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:50.969 09:59:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:50.969 09:59:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:50.969 09:59:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:50.969 09:59:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:14:50.969 09:59:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.969 09:59:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:50.969 09:59:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:50.969 09:59:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.969 09:59:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:50.969 "name": "raid_bdev1", 00:14:50.969 "uuid": "2a11240b-57f9-4128-a420-0e8925582a0d", 00:14:50.969 "strip_size_kb": 0, 00:14:50.969 "state": "online", 00:14:50.969 "raid_level": "raid1", 00:14:50.969 "superblock": false, 00:14:50.969 "num_base_bdevs": 4, 00:14:50.969 "num_base_bdevs_discovered": 3, 00:14:50.969 "num_base_bdevs_operational": 3, 00:14:50.969 "process": { 00:14:50.969 "type": "rebuild", 00:14:50.969 "target": "spare", 00:14:50.969 "progress": { 00:14:50.969 "blocks": 34816, 00:14:50.969 "percent": 53 00:14:50.969 } 00:14:50.969 }, 00:14:50.969 "base_bdevs_list": [ 00:14:50.969 { 00:14:50.969 "name": "spare", 00:14:50.969 "uuid": "217821ac-228f-5e2c-9035-ce7a716c3427", 00:14:50.969 "is_configured": true, 00:14:50.969 "data_offset": 0, 00:14:50.969 "data_size": 65536 00:14:50.969 }, 00:14:50.969 { 00:14:50.969 "name": null, 00:14:50.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.969 "is_configured": false, 00:14:50.969 "data_offset": 0, 00:14:50.969 "data_size": 65536 00:14:50.969 }, 00:14:50.969 { 00:14:50.969 "name": "BaseBdev3", 00:14:50.969 "uuid": "04018d2f-42cc-537b-8ada-bb095e3baaa9", 00:14:50.969 "is_configured": true, 00:14:50.969 "data_offset": 0, 00:14:50.969 "data_size": 65536 00:14:50.969 }, 00:14:50.969 { 00:14:50.969 "name": "BaseBdev4", 00:14:50.969 "uuid": "a1067d6f-0fa7-5f46-a450-fed30360156a", 00:14:50.969 "is_configured": true, 00:14:50.969 "data_offset": 0, 
00:14:50.969 "data_size": 65536 00:14:50.969 } 00:14:50.969 ] 00:14:50.969 }' 00:14:50.969 09:59:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:50.969 09:59:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:50.969 09:59:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:50.969 09:59:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:50.969 09:59:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:50.969 [2024-10-21 09:59:27.495133] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:14:51.228 [2024-10-21 09:59:27.707418] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:14:51.228 [2024-10-21 09:59:27.708225] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:14:52.057 102.50 IOPS, 307.50 MiB/s [2024-10-21T09:59:28.652Z] 09:59:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:52.057 09:59:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:52.057 09:59:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:52.057 09:59:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:52.057 09:59:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:52.057 09:59:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:52.057 09:59:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.057 09:59:28 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.057 09:59:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:52.057 09:59:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:52.057 09:59:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.057 09:59:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:52.057 "name": "raid_bdev1", 00:14:52.057 "uuid": "2a11240b-57f9-4128-a420-0e8925582a0d", 00:14:52.057 "strip_size_kb": 0, 00:14:52.057 "state": "online", 00:14:52.057 "raid_level": "raid1", 00:14:52.057 "superblock": false, 00:14:52.057 "num_base_bdevs": 4, 00:14:52.057 "num_base_bdevs_discovered": 3, 00:14:52.057 "num_base_bdevs_operational": 3, 00:14:52.057 "process": { 00:14:52.057 "type": "rebuild", 00:14:52.057 "target": "spare", 00:14:52.057 "progress": { 00:14:52.057 "blocks": 51200, 00:14:52.057 "percent": 78 00:14:52.057 } 00:14:52.057 }, 00:14:52.057 "base_bdevs_list": [ 00:14:52.057 { 00:14:52.057 "name": "spare", 00:14:52.057 "uuid": "217821ac-228f-5e2c-9035-ce7a716c3427", 00:14:52.057 "is_configured": true, 00:14:52.057 "data_offset": 0, 00:14:52.057 "data_size": 65536 00:14:52.057 }, 00:14:52.057 { 00:14:52.057 "name": null, 00:14:52.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.057 "is_configured": false, 00:14:52.057 "data_offset": 0, 00:14:52.057 "data_size": 65536 00:14:52.057 }, 00:14:52.057 { 00:14:52.057 "name": "BaseBdev3", 00:14:52.057 "uuid": "04018d2f-42cc-537b-8ada-bb095e3baaa9", 00:14:52.057 "is_configured": true, 00:14:52.057 "data_offset": 0, 00:14:52.057 "data_size": 65536 00:14:52.057 }, 00:14:52.057 { 00:14:52.057 "name": "BaseBdev4", 00:14:52.057 "uuid": "a1067d6f-0fa7-5f46-a450-fed30360156a", 00:14:52.057 "is_configured": true, 00:14:52.057 "data_offset": 0, 00:14:52.057 "data_size": 65536 00:14:52.057 } 
00:14:52.057 ] 00:14:52.057 }' 00:14:52.057 09:59:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:52.057 [2024-10-21 09:59:28.506589] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:14:52.057 09:59:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:52.057 09:59:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:52.057 09:59:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:52.057 09:59:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:52.627 [2024-10-21 09:59:29.175883] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:52.887 93.29 IOPS, 279.86 MiB/s [2024-10-21T09:59:29.482Z] [2024-10-21 09:59:29.281416] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:52.887 [2024-10-21 09:59:29.286459] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:53.148 09:59:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:53.148 09:59:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:53.148 09:59:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:53.148 09:59:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:53.148 09:59:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:53.148 09:59:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:53.148 09:59:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.148 09:59:29 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.148 09:59:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:53.148 09:59:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:53.148 09:59:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.148 09:59:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:53.148 "name": "raid_bdev1", 00:14:53.148 "uuid": "2a11240b-57f9-4128-a420-0e8925582a0d", 00:14:53.148 "strip_size_kb": 0, 00:14:53.148 "state": "online", 00:14:53.148 "raid_level": "raid1", 00:14:53.148 "superblock": false, 00:14:53.148 "num_base_bdevs": 4, 00:14:53.148 "num_base_bdevs_discovered": 3, 00:14:53.148 "num_base_bdevs_operational": 3, 00:14:53.148 "base_bdevs_list": [ 00:14:53.148 { 00:14:53.148 "name": "spare", 00:14:53.148 "uuid": "217821ac-228f-5e2c-9035-ce7a716c3427", 00:14:53.148 "is_configured": true, 00:14:53.148 "data_offset": 0, 00:14:53.148 "data_size": 65536 00:14:53.148 }, 00:14:53.148 { 00:14:53.148 "name": null, 00:14:53.148 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.148 "is_configured": false, 00:14:53.148 "data_offset": 0, 00:14:53.148 "data_size": 65536 00:14:53.148 }, 00:14:53.148 { 00:14:53.148 "name": "BaseBdev3", 00:14:53.148 "uuid": "04018d2f-42cc-537b-8ada-bb095e3baaa9", 00:14:53.148 "is_configured": true, 00:14:53.148 "data_offset": 0, 00:14:53.148 "data_size": 65536 00:14:53.148 }, 00:14:53.148 { 00:14:53.148 "name": "BaseBdev4", 00:14:53.148 "uuid": "a1067d6f-0fa7-5f46-a450-fed30360156a", 00:14:53.148 "is_configured": true, 00:14:53.148 "data_offset": 0, 00:14:53.148 "data_size": 65536 00:14:53.148 } 00:14:53.148 ] 00:14:53.148 }' 00:14:53.148 09:59:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:53.148 09:59:29 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:53.148 09:59:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:53.408 09:59:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:53.408 09:59:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:14:53.408 09:59:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:53.408 09:59:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:53.408 09:59:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:53.408 09:59:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:53.408 09:59:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:53.408 09:59:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.408 09:59:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.408 09:59:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:53.408 09:59:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:53.408 09:59:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.408 09:59:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:53.408 "name": "raid_bdev1", 00:14:53.408 "uuid": "2a11240b-57f9-4128-a420-0e8925582a0d", 00:14:53.408 "strip_size_kb": 0, 00:14:53.408 "state": "online", 00:14:53.408 "raid_level": "raid1", 00:14:53.408 "superblock": false, 00:14:53.408 "num_base_bdevs": 4, 00:14:53.408 "num_base_bdevs_discovered": 3, 00:14:53.408 "num_base_bdevs_operational": 3, 00:14:53.408 "base_bdevs_list": [ 00:14:53.408 { 00:14:53.408 
"name": "spare", 00:14:53.408 "uuid": "217821ac-228f-5e2c-9035-ce7a716c3427", 00:14:53.408 "is_configured": true, 00:14:53.408 "data_offset": 0, 00:14:53.408 "data_size": 65536 00:14:53.408 }, 00:14:53.408 { 00:14:53.408 "name": null, 00:14:53.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.408 "is_configured": false, 00:14:53.408 "data_offset": 0, 00:14:53.408 "data_size": 65536 00:14:53.408 }, 00:14:53.408 { 00:14:53.408 "name": "BaseBdev3", 00:14:53.408 "uuid": "04018d2f-42cc-537b-8ada-bb095e3baaa9", 00:14:53.408 "is_configured": true, 00:14:53.408 "data_offset": 0, 00:14:53.408 "data_size": 65536 00:14:53.408 }, 00:14:53.408 { 00:14:53.408 "name": "BaseBdev4", 00:14:53.408 "uuid": "a1067d6f-0fa7-5f46-a450-fed30360156a", 00:14:53.408 "is_configured": true, 00:14:53.408 "data_offset": 0, 00:14:53.408 "data_size": 65536 00:14:53.408 } 00:14:53.408 ] 00:14:53.408 }' 00:14:53.408 09:59:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:53.408 09:59:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:53.408 09:59:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:53.408 09:59:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:53.408 09:59:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:53.408 09:59:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:53.408 09:59:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:53.408 09:59:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:53.408 09:59:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:53.408 09:59:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:14:53.408 09:59:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:53.408 09:59:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:53.408 09:59:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:53.408 09:59:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:53.408 09:59:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.408 09:59:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.408 09:59:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:53.408 09:59:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:53.408 09:59:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.408 09:59:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:53.408 "name": "raid_bdev1", 00:14:53.408 "uuid": "2a11240b-57f9-4128-a420-0e8925582a0d", 00:14:53.408 "strip_size_kb": 0, 00:14:53.408 "state": "online", 00:14:53.408 "raid_level": "raid1", 00:14:53.408 "superblock": false, 00:14:53.408 "num_base_bdevs": 4, 00:14:53.408 "num_base_bdevs_discovered": 3, 00:14:53.408 "num_base_bdevs_operational": 3, 00:14:53.408 "base_bdevs_list": [ 00:14:53.408 { 00:14:53.408 "name": "spare", 00:14:53.408 "uuid": "217821ac-228f-5e2c-9035-ce7a716c3427", 00:14:53.408 "is_configured": true, 00:14:53.408 "data_offset": 0, 00:14:53.408 "data_size": 65536 00:14:53.408 }, 00:14:53.408 { 00:14:53.408 "name": null, 00:14:53.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.408 "is_configured": false, 00:14:53.408 "data_offset": 0, 00:14:53.408 "data_size": 65536 00:14:53.408 }, 00:14:53.408 { 00:14:53.408 "name": "BaseBdev3", 00:14:53.408 "uuid": 
"04018d2f-42cc-537b-8ada-bb095e3baaa9", 00:14:53.408 "is_configured": true, 00:14:53.408 "data_offset": 0, 00:14:53.408 "data_size": 65536 00:14:53.408 }, 00:14:53.408 { 00:14:53.408 "name": "BaseBdev4", 00:14:53.408 "uuid": "a1067d6f-0fa7-5f46-a450-fed30360156a", 00:14:53.408 "is_configured": true, 00:14:53.408 "data_offset": 0, 00:14:53.408 "data_size": 65536 00:14:53.408 } 00:14:53.408 ] 00:14:53.408 }' 00:14:53.408 09:59:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:53.408 09:59:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:53.926 86.88 IOPS, 260.62 MiB/s [2024-10-21T09:59:30.521Z] 09:59:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:53.926 09:59:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.926 09:59:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:53.926 [2024-10-21 09:59:30.392643] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:53.926 [2024-10-21 09:59:30.392683] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:53.926 00:14:53.926 Latency(us) 00:14:53.926 [2024-10-21T09:59:30.521Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:53.926 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:14:53.926 raid_bdev1 : 8.24 84.99 254.98 0.00 0.00 16942.60 332.69 114473.36 00:14:53.926 [2024-10-21T09:59:30.521Z] =================================================================================================================== 00:14:53.926 [2024-10-21T09:59:30.521Z] Total : 84.99 254.98 0.00 0.00 16942.60 332.69 114473.36 00:14:53.926 [2024-10-21 09:59:30.515434] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:53.926 [2024-10-21 09:59:30.515495] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:53.926 [2024-10-21 09:59:30.515630] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:53.926 [2024-10-21 09:59:30.515643] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000005b80 name raid_bdev1, state offline 00:14:53.926 { 00:14:53.926 "results": [ 00:14:53.926 { 00:14:53.926 "job": "raid_bdev1", 00:14:53.926 "core_mask": "0x1", 00:14:53.926 "workload": "randrw", 00:14:53.926 "percentage": 50, 00:14:53.926 "status": "finished", 00:14:53.926 "queue_depth": 2, 00:14:53.926 "io_size": 3145728, 00:14:53.926 "runtime": 8.235865, 00:14:53.926 "iops": 84.99410808700725, 00:14:53.926 "mibps": 254.98232426102174, 00:14:53.926 "io_failed": 0, 00:14:53.926 "io_timeout": 0, 00:14:53.926 "avg_latency_us": 16942.604456643796, 00:14:53.926 "min_latency_us": 332.6882096069869, 00:14:53.926 "max_latency_us": 114473.36244541485 00:14:53.926 } 00:14:53.926 ], 00:14:53.926 "core_count": 1 00:14:53.926 } 00:14:53.926 09:59:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.185 09:59:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.186 09:59:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.186 09:59:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:54.186 09:59:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:14:54.186 09:59:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.186 09:59:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:54.186 09:59:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:54.186 09:59:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:14:54.186 09:59:30 
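The bdevperf summary above reports 84.99 IOPS and 254.98 MiB/s for a workload with a 3145728-byte (3 MiB) I/O size; the MiB/s column is simply IOPS multiplied by the I/O size in MiB. A minimal sketch of that conversion, with the IOPS and io_size values copied from the JSON results in the trace (awk is used here only for the floating-point arithmetic):

```shell
# Reproduce the MiB/s figure from the trace's "results" JSON:
# mibps = iops * io_size_bytes / (1024 * 1024)
iops=84.99410808700725      # "iops" field from the results block
io_size_bytes=3145728       # "io_size" field (3 MiB)

mibps=$(awk -v iops="$iops" -v sz="$io_size_bytes" \
    'BEGIN { printf "%.2f", iops * sz / (1024 * 1024) }')
echo "$mibps"
```

This matches the `mibps` value (254.98…) that bdevperf prints in the same results block.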
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:14:54.186 09:59:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:54.186 09:59:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:14:54.186 09:59:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:54.186 09:59:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:54.186 09:59:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:54.186 09:59:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:54.186 09:59:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:54.186 09:59:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:54.186 09:59:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:14:54.444 /dev/nbd0 00:14:54.444 09:59:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:54.444 09:59:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:54.444 09:59:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:54.444 09:59:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:14:54.444 09:59:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:54.444 09:59:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:54.445 09:59:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:54.445 09:59:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:14:54.445 09:59:30 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:54.445 09:59:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:54.445 09:59:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:54.445 1+0 records in 00:14:54.445 1+0 records out 00:14:54.445 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000495214 s, 8.3 MB/s 00:14:54.445 09:59:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:54.445 09:59:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:14:54.445 09:59:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:54.445 09:59:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:54.445 09:59:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:14:54.445 09:59:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:54.445 09:59:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:54.445 09:59:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:54.445 09:59:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:14:54.445 09:59:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:14:54.445 09:59:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:54.445 09:59:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:14:54.445 09:59:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:14:54.445 09:59:30 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:54.445 09:59:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:14:54.445 09:59:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:54.445 09:59:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:54.445 09:59:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:54.445 09:59:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:54.445 09:59:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:54.445 09:59:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:54.445 09:59:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:14:54.704 /dev/nbd1 00:14:54.704 09:59:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:54.704 09:59:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:54.704 09:59:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:14:54.704 09:59:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:14:54.704 09:59:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:54.704 09:59:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:54.704 09:59:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:14:54.704 09:59:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:14:54.704 09:59:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:54.704 09:59:31 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:54.704 09:59:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:54.704 1+0 records in 00:14:54.704 1+0 records out 00:14:54.704 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000482929 s, 8.5 MB/s 00:14:54.704 09:59:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:54.704 09:59:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:14:54.704 09:59:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:54.704 09:59:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:54.704 09:59:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:14:54.704 09:59:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:54.704 09:59:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:54.704 09:59:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:54.704 09:59:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:54.704 09:59:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:54.704 09:59:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:54.704 09:59:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:54.704 09:59:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:54.704 09:59:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:54.704 09:59:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:54.967 09:59:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:54.967 09:59:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:54.967 09:59:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:54.967 09:59:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:54.967 09:59:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:54.967 09:59:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:54.967 09:59:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:54.967 09:59:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:54.967 09:59:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:54.967 09:59:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:14:54.967 09:59:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:14:54.967 09:59:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:54.967 09:59:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:14:54.967 09:59:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:54.967 09:59:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:54.967 09:59:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:54.967 09:59:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:54.967 09:59:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:54.967 09:59:31 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:54.967 09:59:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:14:55.230 /dev/nbd1 00:14:55.230 09:59:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:55.230 09:59:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:55.230 09:59:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:14:55.230 09:59:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:14:55.230 09:59:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:55.230 09:59:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:55.230 09:59:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:14:55.230 09:59:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:14:55.230 09:59:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:55.230 09:59:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:55.230 09:59:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:55.230 1+0 records in 00:14:55.230 1+0 records out 00:14:55.230 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000394646 s, 10.4 MB/s 00:14:55.230 09:59:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:55.230 09:59:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:14:55.230 09:59:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:55.230 09:59:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:55.230 09:59:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:14:55.230 09:59:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:55.230 09:59:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:55.230 09:59:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:55.488 09:59:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:55.488 09:59:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:55.488 09:59:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:55.488 09:59:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:55.488 09:59:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:55.488 09:59:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:55.488 09:59:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:55.488 09:59:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:55.488 09:59:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:55.488 09:59:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:55.488 09:59:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:55.488 09:59:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:55.488 09:59:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 
00:14:55.488 09:59:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:55.488 09:59:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:55.488 09:59:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:55.488 09:59:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:55.746 09:59:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:55.746 09:59:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:55.746 09:59:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:55.746 09:59:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:55.746 09:59:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:55.746 09:59:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:55.746 09:59:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:55.746 09:59:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:55.746 09:59:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:55.746 09:59:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:55.746 09:59:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:55.747 09:59:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:55.747 09:59:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:55.747 09:59:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:55.747 09:59:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # 
killprocess 78388 00:14:55.747 09:59:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@950 -- # '[' -z 78388 ']' 00:14:55.747 09:59:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # kill -0 78388 00:14:55.747 09:59:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # uname 00:14:55.747 09:59:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:55.747 09:59:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78388 00:14:55.747 09:59:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:55.747 09:59:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:55.747 killing process with pid 78388 00:14:55.747 09:59:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78388' 00:14:55.747 09:59:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@969 -- # kill 78388 00:14:55.747 Received shutdown signal, test time was about 10.086225 seconds 00:14:55.747 00:14:55.747 Latency(us) 00:14:55.747 [2024-10-21T09:59:32.342Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:55.747 [2024-10-21T09:59:32.342Z] =================================================================================================================== 00:14:55.747 [2024-10-21T09:59:32.342Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:55.747 [2024-10-21 09:59:32.339185] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:55.747 09:59:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@974 -- # wait 78388 00:14:56.312 [2024-10-21 09:59:32.783488] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:57.690 09:59:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:14:57.690 00:14:57.690 real 0m13.746s 00:14:57.690 user 
0m17.129s 00:14:57.690 sys 0m2.076s 00:14:57.690 09:59:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:57.690 09:59:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:57.690 ************************************ 00:14:57.690 END TEST raid_rebuild_test_io 00:14:57.690 ************************************ 00:14:57.690 09:59:34 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:14:57.690 09:59:34 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:14:57.690 09:59:34 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:57.690 09:59:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:57.690 ************************************ 00:14:57.690 START TEST raid_rebuild_test_sb_io 00:14:57.690 ************************************ 00:14:57.690 09:59:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 true true true 00:14:57.690 09:59:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:57.690 09:59:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:57.690 09:59:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:57.690 09:59:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:14:57.690 09:59:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:57.690 09:59:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:57.690 09:59:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:57.690 09:59:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:57.690 09:59:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 
00:14:57.690 09:59:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:57.690 09:59:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:57.690 09:59:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:57.690 09:59:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:57.690 09:59:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:57.690 09:59:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:57.690 09:59:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:57.690 09:59:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:57.690 09:59:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:57.690 09:59:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:57.690 09:59:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:57.690 09:59:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:57.690 09:59:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:57.690 09:59:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:57.690 09:59:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:57.690 09:59:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:57.690 09:59:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:57.690 09:59:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:57.690 09:59:34 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:57.690 09:59:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:57.690 09:59:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:57.690 09:59:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=78797 00:14:57.690 09:59:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 78797 00:14:57.690 09:59:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:57.690 09:59:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@831 -- # '[' -z 78797 ']' 00:14:57.690 09:59:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:57.690 09:59:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:57.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:57.690 09:59:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:57.690 09:59:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:57.690 09:59:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:57.690 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:57.690 Zero copy mechanism will not be used. 00:14:57.690 [2024-10-21 09:59:34.247488] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
00:14:57.690 [2024-10-21 09:59:34.247637] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78797 ] 00:14:57.949 [2024-10-21 09:59:34.403023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:58.209 [2024-10-21 09:59:34.545495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:58.209 [2024-10-21 09:59:34.799157] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:58.209 [2024-10-21 09:59:34.799258] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:58.779 09:59:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:58.779 09:59:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # return 0 00:14:58.779 09:59:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:58.779 09:59:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:58.779 09:59:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.779 09:59:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.779 BaseBdev1_malloc 00:14:58.779 09:59:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.779 09:59:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:58.779 09:59:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.779 09:59:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.779 [2024-10-21 09:59:35.145239] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:58.779 [2024-10-21 09:59:35.145314] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:58.779 [2024-10-21 09:59:35.145337] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:14:58.779 [2024-10-21 09:59:35.145349] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:58.779 [2024-10-21 09:59:35.147774] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:58.779 [2024-10-21 09:59:35.147810] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:58.779 BaseBdev1 00:14:58.779 09:59:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.779 09:59:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:58.779 09:59:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:58.779 09:59:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.779 09:59:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.779 BaseBdev2_malloc 00:14:58.779 09:59:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.780 09:59:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:58.780 09:59:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.780 09:59:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.780 [2024-10-21 09:59:35.210146] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:58.780 [2024-10-21 09:59:35.210206] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:14:58.780 [2024-10-21 09:59:35.210225] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:14:58.780 [2024-10-21 09:59:35.210238] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:58.780 [2024-10-21 09:59:35.212566] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:58.780 [2024-10-21 09:59:35.212629] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:58.780 BaseBdev2 00:14:58.780 09:59:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.780 09:59:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:58.780 09:59:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:58.780 09:59:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.780 09:59:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.780 BaseBdev3_malloc 00:14:58.780 09:59:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.780 09:59:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:58.780 09:59:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.780 09:59:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.780 [2024-10-21 09:59:35.283650] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:58.780 [2024-10-21 09:59:35.283706] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:58.780 [2024-10-21 09:59:35.283729] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:14:58.780 
[2024-10-21 09:59:35.283741] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:58.780 [2024-10-21 09:59:35.286089] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:58.780 [2024-10-21 09:59:35.286125] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:58.780 BaseBdev3 00:14:58.780 09:59:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.780 09:59:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:58.780 09:59:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:58.780 09:59:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.780 09:59:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.780 BaseBdev4_malloc 00:14:58.780 09:59:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.780 09:59:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:58.780 09:59:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.780 09:59:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.780 [2024-10-21 09:59:35.349820] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:58.780 [2024-10-21 09:59:35.349879] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:58.780 [2024-10-21 09:59:35.349903] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:14:58.780 [2024-10-21 09:59:35.349915] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:58.780 [2024-10-21 09:59:35.352498] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:58.780 [2024-10-21 09:59:35.352537] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:58.780 BaseBdev4 00:14:58.780 09:59:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.780 09:59:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:58.780 09:59:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.780 09:59:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.040 spare_malloc 00:14:59.040 09:59:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.040 09:59:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:59.040 09:59:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.040 09:59:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.040 spare_delay 00:14:59.040 09:59:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.040 09:59:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:59.040 09:59:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.040 09:59:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.040 [2024-10-21 09:59:35.430439] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:59.040 [2024-10-21 09:59:35.430505] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:59.040 [2024-10-21 09:59:35.430528] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x61600000a580 00:14:59.040 [2024-10-21 09:59:35.430540] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:59.040 [2024-10-21 09:59:35.432895] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:59.040 [2024-10-21 09:59:35.432930] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:59.040 spare 00:14:59.040 09:59:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.040 09:59:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:59.040 09:59:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.040 09:59:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.040 [2024-10-21 09:59:35.442477] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:59.040 [2024-10-21 09:59:35.444516] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:59.040 [2024-10-21 09:59:35.444602] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:59.040 [2024-10-21 09:59:35.444661] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:59.040 [2024-10-21 09:59:35.444847] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000005b80 00:14:59.040 [2024-10-21 09:59:35.444869] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:59.040 [2024-10-21 09:59:35.445122] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:14:59.040 [2024-10-21 09:59:35.445311] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000005b80 00:14:59.040 [2024-10-21 09:59:35.445328] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000005b80 00:14:59.040 [2024-10-21 09:59:35.445498] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:59.040 09:59:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.040 09:59:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:59.040 09:59:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:59.040 09:59:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:59.040 09:59:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:59.040 09:59:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:59.040 09:59:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:59.040 09:59:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:59.040 09:59:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:59.040 09:59:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:59.040 09:59:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:59.040 09:59:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.040 09:59:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.040 09:59:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:59.040 09:59:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.040 09:59:35 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.040 09:59:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:59.040 "name": "raid_bdev1", 00:14:59.040 "uuid": "d50fc346-b863-478a-89b3-51700cd732b5", 00:14:59.040 "strip_size_kb": 0, 00:14:59.040 "state": "online", 00:14:59.040 "raid_level": "raid1", 00:14:59.040 "superblock": true, 00:14:59.040 "num_base_bdevs": 4, 00:14:59.040 "num_base_bdevs_discovered": 4, 00:14:59.040 "num_base_bdevs_operational": 4, 00:14:59.040 "base_bdevs_list": [ 00:14:59.040 { 00:14:59.040 "name": "BaseBdev1", 00:14:59.040 "uuid": "bfb9b111-be1d-5da9-aa51-50305692ed33", 00:14:59.040 "is_configured": true, 00:14:59.040 "data_offset": 2048, 00:14:59.040 "data_size": 63488 00:14:59.040 }, 00:14:59.040 { 00:14:59.040 "name": "BaseBdev2", 00:14:59.040 "uuid": "0a0c607e-44ee-5fd1-aeaf-081f65eb26af", 00:14:59.040 "is_configured": true, 00:14:59.040 "data_offset": 2048, 00:14:59.040 "data_size": 63488 00:14:59.040 }, 00:14:59.040 { 00:14:59.040 "name": "BaseBdev3", 00:14:59.040 "uuid": "edbaae95-e4d1-512f-b664-2ae06c5650f2", 00:14:59.040 "is_configured": true, 00:14:59.040 "data_offset": 2048, 00:14:59.040 "data_size": 63488 00:14:59.040 }, 00:14:59.040 { 00:14:59.040 "name": "BaseBdev4", 00:14:59.040 "uuid": "ec26310a-6e93-53b9-b87d-7f029c642a26", 00:14:59.040 "is_configured": true, 00:14:59.040 "data_offset": 2048, 00:14:59.040 "data_size": 63488 00:14:59.040 } 00:14:59.040 ] 00:14:59.040 }' 00:14:59.040 09:59:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:59.040 09:59:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.300 09:59:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:59.300 09:59:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.300 09:59:35 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:14:59.300 09:59:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:59.300 [2024-10-21 09:59:35.870144] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:59.300 09:59:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.561 09:59:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:14:59.561 09:59:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.561 09:59:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:59.561 09:59:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.561 09:59:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.561 09:59:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.561 09:59:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:59.561 09:59:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:14:59.561 09:59:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:59.561 09:59:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:59.561 09:59:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.561 09:59:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.561 [2024-10-21 09:59:35.965612] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:59.561 09:59:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.561 09:59:35 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:59.561 09:59:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:59.561 09:59:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:59.561 09:59:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:59.561 09:59:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:59.561 09:59:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:59.561 09:59:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:59.561 09:59:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:59.561 09:59:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:59.561 09:59:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:59.561 09:59:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:59.561 09:59:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.561 09:59:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.561 09:59:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.561 09:59:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.561 09:59:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:59.561 "name": "raid_bdev1", 00:14:59.561 "uuid": "d50fc346-b863-478a-89b3-51700cd732b5", 00:14:59.561 "strip_size_kb": 0, 00:14:59.561 "state": "online", 00:14:59.561 "raid_level": "raid1", 00:14:59.561 
"superblock": true, 00:14:59.561 "num_base_bdevs": 4, 00:14:59.561 "num_base_bdevs_discovered": 3, 00:14:59.561 "num_base_bdevs_operational": 3, 00:14:59.561 "base_bdevs_list": [ 00:14:59.561 { 00:14:59.561 "name": null, 00:14:59.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.561 "is_configured": false, 00:14:59.561 "data_offset": 0, 00:14:59.561 "data_size": 63488 00:14:59.561 }, 00:14:59.561 { 00:14:59.561 "name": "BaseBdev2", 00:14:59.561 "uuid": "0a0c607e-44ee-5fd1-aeaf-081f65eb26af", 00:14:59.561 "is_configured": true, 00:14:59.561 "data_offset": 2048, 00:14:59.561 "data_size": 63488 00:14:59.561 }, 00:14:59.561 { 00:14:59.561 "name": "BaseBdev3", 00:14:59.561 "uuid": "edbaae95-e4d1-512f-b664-2ae06c5650f2", 00:14:59.561 "is_configured": true, 00:14:59.561 "data_offset": 2048, 00:14:59.561 "data_size": 63488 00:14:59.561 }, 00:14:59.561 { 00:14:59.561 "name": "BaseBdev4", 00:14:59.561 "uuid": "ec26310a-6e93-53b9-b87d-7f029c642a26", 00:14:59.561 "is_configured": true, 00:14:59.561 "data_offset": 2048, 00:14:59.561 "data_size": 63488 00:14:59.561 } 00:14:59.561 ] 00:14:59.561 }' 00:14:59.561 09:59:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:59.561 09:59:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.561 [2024-10-21 09:59:36.071118] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:59.561 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:59.561 Zero copy mechanism will not be used. 00:14:59.561 Running I/O for 60 seconds... 
00:15:00.131 09:59:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:00.131 09:59:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.131 09:59:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:00.131 [2024-10-21 09:59:36.431507] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:00.131 09:59:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.131 09:59:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:00.131 [2024-10-21 09:59:36.513203] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:00.131 [2024-10-21 09:59:36.515647] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:00.131 [2024-10-21 09:59:36.626479] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:00.131 [2024-10-21 09:59:36.627487] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:00.390 [2024-10-21 09:59:36.840961] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:00.390 [2024-10-21 09:59:36.841491] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:00.649 125.00 IOPS, 375.00 MiB/s [2024-10-21T09:59:37.244Z] [2024-10-21 09:59:37.189817] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:00.909 [2024-10-21 09:59:37.403639] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:00.909 [2024-10-21 09:59:37.404045] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:00.909 09:59:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:00.909 09:59:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:00.909 09:59:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:00.909 09:59:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:00.909 09:59:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:00.909 09:59:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.909 09:59:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.909 09:59:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:00.909 09:59:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:01.174 09:59:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.174 09:59:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:01.174 "name": "raid_bdev1", 00:15:01.174 "uuid": "d50fc346-b863-478a-89b3-51700cd732b5", 00:15:01.174 "strip_size_kb": 0, 00:15:01.174 "state": "online", 00:15:01.174 "raid_level": "raid1", 00:15:01.174 "superblock": true, 00:15:01.174 "num_base_bdevs": 4, 00:15:01.174 "num_base_bdevs_discovered": 4, 00:15:01.174 "num_base_bdevs_operational": 4, 00:15:01.174 "process": { 00:15:01.174 "type": "rebuild", 00:15:01.174 "target": "spare", 00:15:01.174 "progress": { 00:15:01.174 "blocks": 10240, 00:15:01.174 "percent": 16 00:15:01.174 } 00:15:01.174 }, 00:15:01.174 "base_bdevs_list": [ 00:15:01.174 { 00:15:01.174 "name": "spare", 
00:15:01.174 "uuid": "55c58184-7454-5b0a-9c0c-d9519f6b83b4", 00:15:01.174 "is_configured": true, 00:15:01.174 "data_offset": 2048, 00:15:01.174 "data_size": 63488 00:15:01.174 }, 00:15:01.174 { 00:15:01.174 "name": "BaseBdev2", 00:15:01.174 "uuid": "0a0c607e-44ee-5fd1-aeaf-081f65eb26af", 00:15:01.174 "is_configured": true, 00:15:01.174 "data_offset": 2048, 00:15:01.174 "data_size": 63488 00:15:01.174 }, 00:15:01.174 { 00:15:01.174 "name": "BaseBdev3", 00:15:01.174 "uuid": "edbaae95-e4d1-512f-b664-2ae06c5650f2", 00:15:01.174 "is_configured": true, 00:15:01.174 "data_offset": 2048, 00:15:01.174 "data_size": 63488 00:15:01.174 }, 00:15:01.174 { 00:15:01.174 "name": "BaseBdev4", 00:15:01.174 "uuid": "ec26310a-6e93-53b9-b87d-7f029c642a26", 00:15:01.174 "is_configured": true, 00:15:01.174 "data_offset": 2048, 00:15:01.174 "data_size": 63488 00:15:01.174 } 00:15:01.174 ] 00:15:01.174 }' 00:15:01.174 09:59:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:01.174 09:59:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:01.174 09:59:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:01.174 09:59:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:01.175 09:59:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:01.175 09:59:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.175 09:59:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:01.175 [2024-10-21 09:59:37.645314] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:01.175 [2024-10-21 09:59:37.750289] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:01.434 [2024-10-21 09:59:37.771185] 
bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:01.434 [2024-10-21 09:59:37.771277] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:01.435 [2024-10-21 09:59:37.771294] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:01.435 [2024-10-21 09:59:37.803478] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005fb0 00:15:01.435 09:59:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.435 09:59:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:01.435 09:59:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:01.435 09:59:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:01.435 09:59:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:01.435 09:59:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:01.435 09:59:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:01.435 09:59:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:01.435 09:59:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:01.435 09:59:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:01.435 09:59:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:01.435 09:59:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.435 09:59:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.435 09:59:37 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:15:01.435 09:59:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:01.435 09:59:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.435 09:59:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:01.435 "name": "raid_bdev1", 00:15:01.435 "uuid": "d50fc346-b863-478a-89b3-51700cd732b5", 00:15:01.435 "strip_size_kb": 0, 00:15:01.435 "state": "online", 00:15:01.435 "raid_level": "raid1", 00:15:01.435 "superblock": true, 00:15:01.435 "num_base_bdevs": 4, 00:15:01.435 "num_base_bdevs_discovered": 3, 00:15:01.435 "num_base_bdevs_operational": 3, 00:15:01.435 "base_bdevs_list": [ 00:15:01.435 { 00:15:01.435 "name": null, 00:15:01.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.435 "is_configured": false, 00:15:01.435 "data_offset": 0, 00:15:01.435 "data_size": 63488 00:15:01.435 }, 00:15:01.435 { 00:15:01.435 "name": "BaseBdev2", 00:15:01.435 "uuid": "0a0c607e-44ee-5fd1-aeaf-081f65eb26af", 00:15:01.435 "is_configured": true, 00:15:01.435 "data_offset": 2048, 00:15:01.435 "data_size": 63488 00:15:01.435 }, 00:15:01.435 { 00:15:01.435 "name": "BaseBdev3", 00:15:01.435 "uuid": "edbaae95-e4d1-512f-b664-2ae06c5650f2", 00:15:01.435 "is_configured": true, 00:15:01.435 "data_offset": 2048, 00:15:01.435 "data_size": 63488 00:15:01.435 }, 00:15:01.435 { 00:15:01.435 "name": "BaseBdev4", 00:15:01.435 "uuid": "ec26310a-6e93-53b9-b87d-7f029c642a26", 00:15:01.435 "is_configured": true, 00:15:01.435 "data_offset": 2048, 00:15:01.435 "data_size": 63488 00:15:01.435 } 00:15:01.435 ] 00:15:01.435 }' 00:15:01.435 09:59:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:01.435 09:59:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:01.695 119.50 IOPS, 358.50 MiB/s [2024-10-21T09:59:38.290Z] 09:59:38 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:01.695 09:59:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:01.695 09:59:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:01.695 09:59:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:01.695 09:59:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:01.695 09:59:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.695 09:59:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.695 09:59:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:01.695 09:59:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:01.695 09:59:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.954 09:59:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:01.954 "name": "raid_bdev1", 00:15:01.954 "uuid": "d50fc346-b863-478a-89b3-51700cd732b5", 00:15:01.954 "strip_size_kb": 0, 00:15:01.954 "state": "online", 00:15:01.954 "raid_level": "raid1", 00:15:01.954 "superblock": true, 00:15:01.954 "num_base_bdevs": 4, 00:15:01.954 "num_base_bdevs_discovered": 3, 00:15:01.954 "num_base_bdevs_operational": 3, 00:15:01.954 "base_bdevs_list": [ 00:15:01.954 { 00:15:01.954 "name": null, 00:15:01.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.954 "is_configured": false, 00:15:01.954 "data_offset": 0, 00:15:01.954 "data_size": 63488 00:15:01.954 }, 00:15:01.954 { 00:15:01.954 "name": "BaseBdev2", 00:15:01.954 "uuid": "0a0c607e-44ee-5fd1-aeaf-081f65eb26af", 00:15:01.954 "is_configured": true, 00:15:01.954 "data_offset": 
2048, 00:15:01.954 "data_size": 63488 00:15:01.954 }, 00:15:01.954 { 00:15:01.954 "name": "BaseBdev3", 00:15:01.954 "uuid": "edbaae95-e4d1-512f-b664-2ae06c5650f2", 00:15:01.954 "is_configured": true, 00:15:01.954 "data_offset": 2048, 00:15:01.954 "data_size": 63488 00:15:01.954 }, 00:15:01.954 { 00:15:01.954 "name": "BaseBdev4", 00:15:01.954 "uuid": "ec26310a-6e93-53b9-b87d-7f029c642a26", 00:15:01.954 "is_configured": true, 00:15:01.954 "data_offset": 2048, 00:15:01.954 "data_size": 63488 00:15:01.954 } 00:15:01.954 ] 00:15:01.954 }' 00:15:01.954 09:59:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:01.954 09:59:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:01.954 09:59:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:01.954 09:59:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:01.954 09:59:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:01.954 09:59:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.954 09:59:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:01.954 [2024-10-21 09:59:38.395199] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:01.954 09:59:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.954 09:59:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:01.954 [2024-10-21 09:59:38.475634] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:15:01.954 [2024-10-21 09:59:38.477890] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:02.214 [2024-10-21 09:59:38.602217] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:02.214 [2024-10-21 09:59:38.604515] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:02.472 [2024-10-21 09:59:38.824921] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:02.472 [2024-10-21 09:59:38.826188] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:02.731 122.00 IOPS, 366.00 MiB/s [2024-10-21T09:59:39.326Z] [2024-10-21 09:59:39.156196] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:02.991 [2024-10-21 09:59:39.360341] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:02.991 09:59:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:02.991 09:59:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:02.991 09:59:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:02.991 09:59:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:02.991 09:59:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:02.991 09:59:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.991 09:59:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.991 09:59:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.991 09:59:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:15:02.991 09:59:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.991 09:59:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:02.991 "name": "raid_bdev1", 00:15:02.991 "uuid": "d50fc346-b863-478a-89b3-51700cd732b5", 00:15:02.991 "strip_size_kb": 0, 00:15:02.991 "state": "online", 00:15:02.991 "raid_level": "raid1", 00:15:02.991 "superblock": true, 00:15:02.991 "num_base_bdevs": 4, 00:15:02.991 "num_base_bdevs_discovered": 4, 00:15:02.991 "num_base_bdevs_operational": 4, 00:15:02.991 "process": { 00:15:02.991 "type": "rebuild", 00:15:02.991 "target": "spare", 00:15:02.991 "progress": { 00:15:02.991 "blocks": 10240, 00:15:02.991 "percent": 16 00:15:02.991 } 00:15:02.991 }, 00:15:02.991 "base_bdevs_list": [ 00:15:02.991 { 00:15:02.991 "name": "spare", 00:15:02.991 "uuid": "55c58184-7454-5b0a-9c0c-d9519f6b83b4", 00:15:02.991 "is_configured": true, 00:15:02.991 "data_offset": 2048, 00:15:02.991 "data_size": 63488 00:15:02.991 }, 00:15:02.991 { 00:15:02.991 "name": "BaseBdev2", 00:15:02.991 "uuid": "0a0c607e-44ee-5fd1-aeaf-081f65eb26af", 00:15:02.991 "is_configured": true, 00:15:02.991 "data_offset": 2048, 00:15:02.991 "data_size": 63488 00:15:02.991 }, 00:15:02.991 { 00:15:02.991 "name": "BaseBdev3", 00:15:02.991 "uuid": "edbaae95-e4d1-512f-b664-2ae06c5650f2", 00:15:02.991 "is_configured": true, 00:15:02.991 "data_offset": 2048, 00:15:02.991 "data_size": 63488 00:15:02.991 }, 00:15:02.991 { 00:15:02.991 "name": "BaseBdev4", 00:15:02.991 "uuid": "ec26310a-6e93-53b9-b87d-7f029c642a26", 00:15:02.991 "is_configured": true, 00:15:02.991 "data_offset": 2048, 00:15:02.991 "data_size": 63488 00:15:02.991 } 00:15:02.991 ] 00:15:02.991 }' 00:15:02.991 09:59:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:02.991 09:59:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:02.991 
09:59:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:02.991 09:59:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:02.991 09:59:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:02.991 09:59:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:02.991 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:02.991 09:59:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:15:02.991 09:59:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:02.991 09:59:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:15:02.991 09:59:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:02.991 09:59:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.991 09:59:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:02.991 [2024-10-21 09:59:39.583863] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:03.251 [2024-10-21 09:59:39.596034] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:03.251 [2024-10-21 09:59:39.598396] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:03.251 [2024-10-21 09:59:39.806974] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000005fb0 00:15:03.251 [2024-10-21 09:59:39.807061] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006150 00:15:03.251 09:59:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:15:03.251 09:59:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:15:03.251 09:59:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:15:03.251 09:59:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:03.251 09:59:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:03.251 09:59:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:03.251 09:59:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:03.251 09:59:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:03.251 [2024-10-21 09:59:39.819881] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:03.251 09:59:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.251 09:59:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:03.251 09:59:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.251 09:59:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:03.511 09:59:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.511 09:59:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:03.511 "name": "raid_bdev1", 00:15:03.511 "uuid": "d50fc346-b863-478a-89b3-51700cd732b5", 00:15:03.511 "strip_size_kb": 0, 00:15:03.511 "state": "online", 00:15:03.511 "raid_level": "raid1", 00:15:03.511 "superblock": true, 00:15:03.511 "num_base_bdevs": 4, 00:15:03.511 "num_base_bdevs_discovered": 3, 00:15:03.511 
"num_base_bdevs_operational": 3, 00:15:03.511 "process": { 00:15:03.511 "type": "rebuild", 00:15:03.511 "target": "spare", 00:15:03.511 "progress": { 00:15:03.511 "blocks": 14336, 00:15:03.511 "percent": 22 00:15:03.511 } 00:15:03.511 }, 00:15:03.511 "base_bdevs_list": [ 00:15:03.511 { 00:15:03.511 "name": "spare", 00:15:03.511 "uuid": "55c58184-7454-5b0a-9c0c-d9519f6b83b4", 00:15:03.511 "is_configured": true, 00:15:03.511 "data_offset": 2048, 00:15:03.511 "data_size": 63488 00:15:03.511 }, 00:15:03.511 { 00:15:03.511 "name": null, 00:15:03.511 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.511 "is_configured": false, 00:15:03.511 "data_offset": 0, 00:15:03.511 "data_size": 63488 00:15:03.511 }, 00:15:03.511 { 00:15:03.511 "name": "BaseBdev3", 00:15:03.511 "uuid": "edbaae95-e4d1-512f-b664-2ae06c5650f2", 00:15:03.511 "is_configured": true, 00:15:03.511 "data_offset": 2048, 00:15:03.511 "data_size": 63488 00:15:03.511 }, 00:15:03.511 { 00:15:03.511 "name": "BaseBdev4", 00:15:03.511 "uuid": "ec26310a-6e93-53b9-b87d-7f029c642a26", 00:15:03.511 "is_configured": true, 00:15:03.511 "data_offset": 2048, 00:15:03.511 "data_size": 63488 00:15:03.511 } 00:15:03.511 ] 00:15:03.511 }' 00:15:03.511 09:59:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:03.511 09:59:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:03.511 09:59:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:03.511 09:59:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:03.511 09:59:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=506 00:15:03.511 09:59:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:03.511 09:59:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:15:03.511 09:59:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:03.511 09:59:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:03.511 09:59:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:03.511 09:59:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:03.511 09:59:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.511 09:59:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.511 09:59:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:03.512 09:59:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:03.512 09:59:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.512 09:59:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:03.512 "name": "raid_bdev1", 00:15:03.512 "uuid": "d50fc346-b863-478a-89b3-51700cd732b5", 00:15:03.512 "strip_size_kb": 0, 00:15:03.512 "state": "online", 00:15:03.512 "raid_level": "raid1", 00:15:03.512 "superblock": true, 00:15:03.512 "num_base_bdevs": 4, 00:15:03.512 "num_base_bdevs_discovered": 3, 00:15:03.512 "num_base_bdevs_operational": 3, 00:15:03.512 "process": { 00:15:03.512 "type": "rebuild", 00:15:03.512 "target": "spare", 00:15:03.512 "progress": { 00:15:03.512 "blocks": 14336, 00:15:03.512 "percent": 22 00:15:03.512 } 00:15:03.512 }, 00:15:03.512 "base_bdevs_list": [ 00:15:03.512 { 00:15:03.512 "name": "spare", 00:15:03.512 "uuid": "55c58184-7454-5b0a-9c0c-d9519f6b83b4", 00:15:03.512 "is_configured": true, 00:15:03.512 "data_offset": 2048, 00:15:03.512 "data_size": 63488 00:15:03.512 }, 00:15:03.512 { 00:15:03.512 "name": null, 
00:15:03.512 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.512 "is_configured": false, 00:15:03.512 "data_offset": 0, 00:15:03.512 "data_size": 63488 00:15:03.512 }, 00:15:03.512 { 00:15:03.512 "name": "BaseBdev3", 00:15:03.512 "uuid": "edbaae95-e4d1-512f-b664-2ae06c5650f2", 00:15:03.512 "is_configured": true, 00:15:03.512 "data_offset": 2048, 00:15:03.512 "data_size": 63488 00:15:03.512 }, 00:15:03.512 { 00:15:03.512 "name": "BaseBdev4", 00:15:03.512 "uuid": "ec26310a-6e93-53b9-b87d-7f029c642a26", 00:15:03.512 "is_configured": true, 00:15:03.512 "data_offset": 2048, 00:15:03.512 "data_size": 63488 00:15:03.512 } 00:15:03.512 ] 00:15:03.512 }' 00:15:03.512 09:59:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:03.512 [2024-10-21 09:59:40.031345] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:15:03.512 [2024-10-21 09:59:40.031896] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:15:03.512 09:59:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:03.512 09:59:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:03.771 104.50 IOPS, 313.50 MiB/s [2024-10-21T09:59:40.366Z] 09:59:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:03.771 09:59:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:04.031 [2024-10-21 09:59:40.596040] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:15:04.599 94.20 IOPS, 282.60 MiB/s [2024-10-21T09:59:41.194Z] 09:59:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:04.599 09:59:41 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:04.599 09:59:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:04.599 09:59:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:04.599 09:59:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:04.599 09:59:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:04.599 09:59:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.599 09:59:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.599 09:59:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:04.599 09:59:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:04.599 09:59:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.599 09:59:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:04.599 "name": "raid_bdev1", 00:15:04.599 "uuid": "d50fc346-b863-478a-89b3-51700cd732b5", 00:15:04.599 "strip_size_kb": 0, 00:15:04.599 "state": "online", 00:15:04.600 "raid_level": "raid1", 00:15:04.600 "superblock": true, 00:15:04.600 "num_base_bdevs": 4, 00:15:04.600 "num_base_bdevs_discovered": 3, 00:15:04.600 "num_base_bdevs_operational": 3, 00:15:04.600 "process": { 00:15:04.600 "type": "rebuild", 00:15:04.600 "target": "spare", 00:15:04.600 "progress": { 00:15:04.600 "blocks": 32768, 00:15:04.600 "percent": 51 00:15:04.600 } 00:15:04.600 }, 00:15:04.600 "base_bdevs_list": [ 00:15:04.600 { 00:15:04.600 "name": "spare", 00:15:04.600 "uuid": "55c58184-7454-5b0a-9c0c-d9519f6b83b4", 00:15:04.600 "is_configured": true, 00:15:04.600 "data_offset": 2048, 00:15:04.600 "data_size": 63488 
00:15:04.600 }, 00:15:04.600 { 00:15:04.600 "name": null, 00:15:04.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:04.600 "is_configured": false, 00:15:04.600 "data_offset": 0, 00:15:04.600 "data_size": 63488 00:15:04.600 }, 00:15:04.600 { 00:15:04.600 "name": "BaseBdev3", 00:15:04.600 "uuid": "edbaae95-e4d1-512f-b664-2ae06c5650f2", 00:15:04.600 "is_configured": true, 00:15:04.600 "data_offset": 2048, 00:15:04.600 "data_size": 63488 00:15:04.600 }, 00:15:04.600 { 00:15:04.600 "name": "BaseBdev4", 00:15:04.600 "uuid": "ec26310a-6e93-53b9-b87d-7f029c642a26", 00:15:04.600 "is_configured": true, 00:15:04.600 "data_offset": 2048, 00:15:04.600 "data_size": 63488 00:15:04.600 } 00:15:04.600 ] 00:15:04.600 }' 00:15:04.600 09:59:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:04.859 09:59:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:04.859 09:59:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:04.859 09:59:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:04.859 09:59:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:04.859 [2024-10-21 09:59:41.411295] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:15:05.427 [2024-10-21 09:59:41.879175] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:15:05.686 84.00 IOPS, 252.00 MiB/s [2024-10-21T09:59:42.281Z] 09:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:05.686 09:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:05.686 09:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:15:05.686 09:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:05.686 09:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:05.686 09:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:05.686 09:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.686 09:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:05.686 09:59:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.686 09:59:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:05.945 09:59:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.945 09:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:05.945 "name": "raid_bdev1", 00:15:05.945 "uuid": "d50fc346-b863-478a-89b3-51700cd732b5", 00:15:05.945 "strip_size_kb": 0, 00:15:05.945 "state": "online", 00:15:05.945 "raid_level": "raid1", 00:15:05.945 "superblock": true, 00:15:05.945 "num_base_bdevs": 4, 00:15:05.945 "num_base_bdevs_discovered": 3, 00:15:05.945 "num_base_bdevs_operational": 3, 00:15:05.945 "process": { 00:15:05.945 "type": "rebuild", 00:15:05.945 "target": "spare", 00:15:05.945 "progress": { 00:15:05.945 "blocks": 51200, 00:15:05.945 "percent": 80 00:15:05.945 } 00:15:05.945 }, 00:15:05.945 "base_bdevs_list": [ 00:15:05.945 { 00:15:05.945 "name": "spare", 00:15:05.945 "uuid": "55c58184-7454-5b0a-9c0c-d9519f6b83b4", 00:15:05.945 "is_configured": true, 00:15:05.945 "data_offset": 2048, 00:15:05.945 "data_size": 63488 00:15:05.945 }, 00:15:05.945 { 00:15:05.945 "name": null, 00:15:05.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.945 "is_configured": false, 00:15:05.945 
"data_offset": 0, 00:15:05.945 "data_size": 63488 00:15:05.945 }, 00:15:05.945 { 00:15:05.945 "name": "BaseBdev3", 00:15:05.945 "uuid": "edbaae95-e4d1-512f-b664-2ae06c5650f2", 00:15:05.945 "is_configured": true, 00:15:05.945 "data_offset": 2048, 00:15:05.945 "data_size": 63488 00:15:05.945 }, 00:15:05.945 { 00:15:05.945 "name": "BaseBdev4", 00:15:05.945 "uuid": "ec26310a-6e93-53b9-b87d-7f029c642a26", 00:15:05.945 "is_configured": true, 00:15:05.945 "data_offset": 2048, 00:15:05.945 "data_size": 63488 00:15:05.945 } 00:15:05.945 ] 00:15:05.945 }' 00:15:05.945 09:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:05.945 09:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:05.945 09:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:05.945 09:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:05.945 09:59:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:06.204 [2024-10-21 09:59:42.647775] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:15:06.463 [2024-10-21 09:59:42.980873] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:06.723 77.71 IOPS, 233.14 MiB/s [2024-10-21T09:59:43.318Z] [2024-10-21 09:59:43.085132] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:06.723 [2024-10-21 09:59:43.089204] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:06.983 09:59:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:06.983 09:59:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:06.983 09:59:43 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:06.983 09:59:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:06.983 09:59:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:06.983 09:59:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:06.983 09:59:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.983 09:59:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.983 09:59:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.983 09:59:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:06.983 09:59:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.983 09:59:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:06.983 "name": "raid_bdev1", 00:15:06.983 "uuid": "d50fc346-b863-478a-89b3-51700cd732b5", 00:15:06.983 "strip_size_kb": 0, 00:15:06.983 "state": "online", 00:15:06.983 "raid_level": "raid1", 00:15:06.983 "superblock": true, 00:15:06.983 "num_base_bdevs": 4, 00:15:06.983 "num_base_bdevs_discovered": 3, 00:15:06.983 "num_base_bdevs_operational": 3, 00:15:06.983 "base_bdevs_list": [ 00:15:06.983 { 00:15:06.983 "name": "spare", 00:15:06.983 "uuid": "55c58184-7454-5b0a-9c0c-d9519f6b83b4", 00:15:06.983 "is_configured": true, 00:15:06.983 "data_offset": 2048, 00:15:06.983 "data_size": 63488 00:15:06.983 }, 00:15:06.983 { 00:15:06.983 "name": null, 00:15:06.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.983 "is_configured": false, 00:15:06.983 "data_offset": 0, 00:15:06.983 "data_size": 63488 00:15:06.983 }, 00:15:06.983 { 00:15:06.983 "name": "BaseBdev3", 00:15:06.983 "uuid": 
"edbaae95-e4d1-512f-b664-2ae06c5650f2", 00:15:06.983 "is_configured": true, 00:15:06.983 "data_offset": 2048, 00:15:06.983 "data_size": 63488 00:15:06.983 }, 00:15:06.983 { 00:15:06.983 "name": "BaseBdev4", 00:15:06.983 "uuid": "ec26310a-6e93-53b9-b87d-7f029c642a26", 00:15:06.983 "is_configured": true, 00:15:06.983 "data_offset": 2048, 00:15:06.983 "data_size": 63488 00:15:06.983 } 00:15:06.983 ] 00:15:06.983 }' 00:15:06.983 09:59:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:06.983 09:59:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:06.983 09:59:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:06.984 09:59:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:06.984 09:59:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:15:06.984 09:59:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:06.984 09:59:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:06.984 09:59:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:06.984 09:59:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:06.984 09:59:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:06.984 09:59:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.984 09:59:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.984 09:59:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.984 09:59:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:15:06.984 09:59:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.984 09:59:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:06.984 "name": "raid_bdev1", 00:15:06.984 "uuid": "d50fc346-b863-478a-89b3-51700cd732b5", 00:15:06.984 "strip_size_kb": 0, 00:15:06.984 "state": "online", 00:15:06.984 "raid_level": "raid1", 00:15:06.984 "superblock": true, 00:15:06.984 "num_base_bdevs": 4, 00:15:06.984 "num_base_bdevs_discovered": 3, 00:15:06.984 "num_base_bdevs_operational": 3, 00:15:06.984 "base_bdevs_list": [ 00:15:06.984 { 00:15:06.984 "name": "spare", 00:15:06.984 "uuid": "55c58184-7454-5b0a-9c0c-d9519f6b83b4", 00:15:06.984 "is_configured": true, 00:15:06.984 "data_offset": 2048, 00:15:06.984 "data_size": 63488 00:15:06.984 }, 00:15:06.984 { 00:15:06.984 "name": null, 00:15:06.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.984 "is_configured": false, 00:15:06.984 "data_offset": 0, 00:15:06.984 "data_size": 63488 00:15:06.984 }, 00:15:06.984 { 00:15:06.984 "name": "BaseBdev3", 00:15:06.984 "uuid": "edbaae95-e4d1-512f-b664-2ae06c5650f2", 00:15:06.984 "is_configured": true, 00:15:06.984 "data_offset": 2048, 00:15:06.984 "data_size": 63488 00:15:06.984 }, 00:15:06.984 { 00:15:06.984 "name": "BaseBdev4", 00:15:06.984 "uuid": "ec26310a-6e93-53b9-b87d-7f029c642a26", 00:15:06.984 "is_configured": true, 00:15:06.984 "data_offset": 2048, 00:15:06.984 "data_size": 63488 00:15:06.984 } 00:15:06.984 ] 00:15:06.984 }' 00:15:07.254 09:59:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:07.254 09:59:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:07.254 09:59:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:07.254 09:59:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:07.254 
09:59:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:07.254 09:59:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:07.254 09:59:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:07.254 09:59:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:07.254 09:59:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:07.254 09:59:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:07.254 09:59:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:07.254 09:59:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:07.254 09:59:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:07.254 09:59:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:07.254 09:59:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:07.254 09:59:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.254 09:59:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.254 09:59:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:07.254 09:59:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.254 09:59:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:07.254 "name": "raid_bdev1", 00:15:07.254 "uuid": "d50fc346-b863-478a-89b3-51700cd732b5", 00:15:07.254 "strip_size_kb": 0, 00:15:07.254 "state": "online", 00:15:07.254 "raid_level": "raid1", 00:15:07.254 
"superblock": true, 00:15:07.254 "num_base_bdevs": 4, 00:15:07.254 "num_base_bdevs_discovered": 3, 00:15:07.254 "num_base_bdevs_operational": 3, 00:15:07.254 "base_bdevs_list": [ 00:15:07.254 { 00:15:07.254 "name": "spare", 00:15:07.254 "uuid": "55c58184-7454-5b0a-9c0c-d9519f6b83b4", 00:15:07.254 "is_configured": true, 00:15:07.254 "data_offset": 2048, 00:15:07.254 "data_size": 63488 00:15:07.254 }, 00:15:07.254 { 00:15:07.254 "name": null, 00:15:07.254 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.254 "is_configured": false, 00:15:07.254 "data_offset": 0, 00:15:07.254 "data_size": 63488 00:15:07.254 }, 00:15:07.254 { 00:15:07.254 "name": "BaseBdev3", 00:15:07.254 "uuid": "edbaae95-e4d1-512f-b664-2ae06c5650f2", 00:15:07.254 "is_configured": true, 00:15:07.254 "data_offset": 2048, 00:15:07.254 "data_size": 63488 00:15:07.254 }, 00:15:07.254 { 00:15:07.254 "name": "BaseBdev4", 00:15:07.254 "uuid": "ec26310a-6e93-53b9-b87d-7f029c642a26", 00:15:07.254 "is_configured": true, 00:15:07.254 "data_offset": 2048, 00:15:07.254 "data_size": 63488 00:15:07.254 } 00:15:07.254 ] 00:15:07.254 }' 00:15:07.254 09:59:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:07.254 09:59:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:07.529 09:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:07.529 09:59:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.529 09:59:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:07.529 [2024-10-21 09:59:44.072664] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:07.529 [2024-10-21 09:59:44.072706] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:07.788 71.75 IOPS, 215.25 MiB/s 00:15:07.788 Latency(us) 00:15:07.788 [2024-10-21T09:59:44.383Z] 
Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:07.788 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:15:07.788 raid_bdev1 : 8.11 70.98 212.95 0.00 0.00 19486.16 327.32 118136.51 00:15:07.788 [2024-10-21T09:59:44.383Z] =================================================================================================================== 00:15:07.788 [2024-10-21T09:59:44.383Z] Total : 70.98 212.95 0.00 0.00 19486.16 327.32 118136.51 00:15:07.788 [2024-10-21 09:59:44.194099] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:07.788 [2024-10-21 09:59:44.194151] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:07.788 [2024-10-21 09:59:44.194258] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:07.788 [2024-10-21 09:59:44.194272] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000005b80 name raid_bdev1, state offline 00:15:07.788 { 00:15:07.788 "results": [ 00:15:07.788 { 00:15:07.788 "job": "raid_bdev1", 00:15:07.788 "core_mask": "0x1", 00:15:07.788 "workload": "randrw", 00:15:07.788 "percentage": 50, 00:15:07.788 "status": "finished", 00:15:07.788 "queue_depth": 2, 00:15:07.788 "io_size": 3145728, 00:15:07.788 "runtime": 8.114531, 00:15:07.788 "iops": 70.98376973358042, 00:15:07.788 "mibps": 212.95130920074126, 00:15:07.788 "io_failed": 0, 00:15:07.788 "io_timeout": 0, 00:15:07.788 "avg_latency_us": 19486.160116448325, 00:15:07.788 "min_latency_us": 327.32227074235806, 00:15:07.788 "max_latency_us": 118136.51004366812 00:15:07.788 } 00:15:07.788 ], 00:15:07.788 "core_count": 1 00:15:07.788 } 00:15:07.788 09:59:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.788 09:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.788 09:59:44 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.788 09:59:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:07.788 09:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:15:07.788 09:59:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.788 09:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:07.788 09:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:07.788 09:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:15:07.788 09:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:15:07.788 09:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:07.788 09:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:15:07.788 09:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:07.788 09:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:07.788 09:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:07.788 09:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:15:07.788 09:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:07.788 09:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:07.788 09:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:15:08.047 /dev/nbd0 00:15:08.047 09:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:08.047 09:59:44 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:08.047 09:59:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:08.047 09:59:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:15:08.047 09:59:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:08.047 09:59:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:08.047 09:59:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:08.047 09:59:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:15:08.047 09:59:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:08.047 09:59:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:08.047 09:59:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:08.047 1+0 records in 00:15:08.047 1+0 records out 00:15:08.047 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000587904 s, 7.0 MB/s 00:15:08.047 09:59:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:08.047 09:59:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:15:08.047 09:59:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:08.047 09:59:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:08.047 09:59:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:15:08.047 09:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:08.047 
09:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:08.048 09:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:08.048 09:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:15:08.048 09:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:15:08.048 09:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:08.048 09:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:15:08.048 09:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:15:08.048 09:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:08.048 09:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:15:08.048 09:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:08.048 09:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:15:08.048 09:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:08.048 09:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:15:08.048 09:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:08.048 09:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:08.048 09:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:15:08.306 /dev/nbd1 00:15:08.306 09:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:08.306 09:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd1 00:15:08.306 09:59:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:15:08.306 09:59:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:15:08.306 09:59:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:08.306 09:59:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:08.306 09:59:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:15:08.306 09:59:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:15:08.306 09:59:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:08.306 09:59:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:08.306 09:59:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:08.306 1+0 records in 00:15:08.306 1+0 records out 00:15:08.307 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000406766 s, 10.1 MB/s 00:15:08.307 09:59:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:08.307 09:59:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:15:08.307 09:59:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:08.307 09:59:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:08.307 09:59:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:15:08.307 09:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:08.307 09:59:44 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:08.307 09:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:08.566 09:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:15:08.566 09:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:08.566 09:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:15:08.566 09:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:08.566 09:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:15:08.566 09:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:08.566 09:59:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:08.825 09:59:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:08.825 09:59:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:08.825 09:59:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:08.825 09:59:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:08.825 09:59:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:08.825 09:59:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:08.825 09:59:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:15:08.825 09:59:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:08.825 09:59:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:08.825 09:59:45 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:15:08.825 09:59:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:15:08.825 09:59:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:08.825 09:59:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:15:08.825 09:59:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:08.825 09:59:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:15:08.825 09:59:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:08.825 09:59:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:15:08.825 09:59:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:08.825 09:59:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:08.825 09:59:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:15:08.825 /dev/nbd1 00:15:08.825 09:59:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:08.825 09:59:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:08.825 09:59:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:15:08.825 09:59:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:15:08.825 09:59:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:08.825 09:59:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:08.825 09:59:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # 
grep -q -w nbd1 /proc/partitions 00:15:09.085 09:59:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:15:09.085 09:59:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:09.085 09:59:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:09.085 09:59:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:09.085 1+0 records in 00:15:09.085 1+0 records out 00:15:09.085 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000441261 s, 9.3 MB/s 00:15:09.085 09:59:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:09.085 09:59:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:15:09.085 09:59:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:09.085 09:59:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:09.085 09:59:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:15:09.085 09:59:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:09.085 09:59:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:09.085 09:59:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:09.085 09:59:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:15:09.085 09:59:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:09.085 09:59:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:15:09.085 09:59:45 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:09.085 09:59:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:15:09.085 09:59:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:09.085 09:59:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:09.344 09:59:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:09.344 09:59:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:09.344 09:59:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:09.344 09:59:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:09.344 09:59:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:09.344 09:59:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:09.344 09:59:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:15:09.344 09:59:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:09.344 09:59:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:09.344 09:59:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:09.344 09:59:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:09.344 09:59:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:09.344 09:59:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:15:09.344 09:59:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:09.344 09:59:45 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:09.604 09:59:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:09.604 09:59:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:09.604 09:59:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:09.604 09:59:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:09.604 09:59:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:09.604 09:59:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:09.604 09:59:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:15:09.604 09:59:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:09.604 09:59:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:09.604 09:59:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:09.604 09:59:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.604 09:59:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:09.604 09:59:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.604 09:59:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:09.604 09:59:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.604 09:59:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:09.604 [2024-10-21 09:59:45.974631] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:09.604 
[2024-10-21 09:59:45.974713] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:09.604 [2024-10-21 09:59:45.974738] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:15:09.604 [2024-10-21 09:59:45.974750] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:09.604 [2024-10-21 09:59:45.977370] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:09.604 [2024-10-21 09:59:45.977414] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:09.604 [2024-10-21 09:59:45.977533] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:09.604 [2024-10-21 09:59:45.977632] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:09.604 [2024-10-21 09:59:45.977799] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:09.604 [2024-10-21 09:59:45.977913] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:09.604 spare 00:15:09.604 09:59:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.604 09:59:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:09.604 09:59:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.604 09:59:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:09.604 [2024-10-21 09:59:46.077836] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000005f00 00:15:09.604 [2024-10-21 09:59:46.077925] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:09.604 [2024-10-21 09:59:46.078382] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000036ef0 00:15:09.604 [2024-10-21 09:59:46.078674] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000005f00 00:15:09.604 [2024-10-21 09:59:46.078690] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000005f00 00:15:09.604 [2024-10-21 09:59:46.078969] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:09.604 09:59:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.604 09:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:09.604 09:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:09.604 09:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:09.604 09:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:09.604 09:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:09.604 09:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:09.604 09:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:09.604 09:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:09.604 09:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:09.604 09:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:09.604 09:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.604 09:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:09.604 09:59:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.604 09:59:46 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:09.604 09:59:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.604 09:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:09.604 "name": "raid_bdev1", 00:15:09.604 "uuid": "d50fc346-b863-478a-89b3-51700cd732b5", 00:15:09.604 "strip_size_kb": 0, 00:15:09.604 "state": "online", 00:15:09.604 "raid_level": "raid1", 00:15:09.604 "superblock": true, 00:15:09.604 "num_base_bdevs": 4, 00:15:09.604 "num_base_bdevs_discovered": 3, 00:15:09.604 "num_base_bdevs_operational": 3, 00:15:09.604 "base_bdevs_list": [ 00:15:09.604 { 00:15:09.604 "name": "spare", 00:15:09.604 "uuid": "55c58184-7454-5b0a-9c0c-d9519f6b83b4", 00:15:09.604 "is_configured": true, 00:15:09.604 "data_offset": 2048, 00:15:09.604 "data_size": 63488 00:15:09.604 }, 00:15:09.604 { 00:15:09.604 "name": null, 00:15:09.604 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:09.604 "is_configured": false, 00:15:09.604 "data_offset": 2048, 00:15:09.604 "data_size": 63488 00:15:09.604 }, 00:15:09.604 { 00:15:09.604 "name": "BaseBdev3", 00:15:09.604 "uuid": "edbaae95-e4d1-512f-b664-2ae06c5650f2", 00:15:09.604 "is_configured": true, 00:15:09.604 "data_offset": 2048, 00:15:09.604 "data_size": 63488 00:15:09.604 }, 00:15:09.604 { 00:15:09.604 "name": "BaseBdev4", 00:15:09.604 "uuid": "ec26310a-6e93-53b9-b87d-7f029c642a26", 00:15:09.604 "is_configured": true, 00:15:09.604 "data_offset": 2048, 00:15:09.604 "data_size": 63488 00:15:09.604 } 00:15:09.604 ] 00:15:09.604 }' 00:15:09.604 09:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:09.604 09:59:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:10.173 09:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:10.173 09:59:46 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:10.173 09:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:10.173 09:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:10.173 09:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:10.173 09:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.173 09:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:10.173 09:59:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.173 09:59:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:10.173 09:59:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.173 09:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:10.173 "name": "raid_bdev1", 00:15:10.173 "uuid": "d50fc346-b863-478a-89b3-51700cd732b5", 00:15:10.173 "strip_size_kb": 0, 00:15:10.173 "state": "online", 00:15:10.173 "raid_level": "raid1", 00:15:10.173 "superblock": true, 00:15:10.173 "num_base_bdevs": 4, 00:15:10.173 "num_base_bdevs_discovered": 3, 00:15:10.173 "num_base_bdevs_operational": 3, 00:15:10.173 "base_bdevs_list": [ 00:15:10.173 { 00:15:10.173 "name": "spare", 00:15:10.173 "uuid": "55c58184-7454-5b0a-9c0c-d9519f6b83b4", 00:15:10.173 "is_configured": true, 00:15:10.173 "data_offset": 2048, 00:15:10.173 "data_size": 63488 00:15:10.173 }, 00:15:10.173 { 00:15:10.173 "name": null, 00:15:10.173 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.173 "is_configured": false, 00:15:10.173 "data_offset": 2048, 00:15:10.173 "data_size": 63488 00:15:10.173 }, 00:15:10.173 { 00:15:10.173 "name": "BaseBdev3", 00:15:10.173 "uuid": "edbaae95-e4d1-512f-b664-2ae06c5650f2", 
00:15:10.173 "is_configured": true, 00:15:10.173 "data_offset": 2048, 00:15:10.173 "data_size": 63488 00:15:10.173 }, 00:15:10.173 { 00:15:10.173 "name": "BaseBdev4", 00:15:10.173 "uuid": "ec26310a-6e93-53b9-b87d-7f029c642a26", 00:15:10.173 "is_configured": true, 00:15:10.173 "data_offset": 2048, 00:15:10.173 "data_size": 63488 00:15:10.173 } 00:15:10.173 ] 00:15:10.173 }' 00:15:10.173 09:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:10.173 09:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:10.173 09:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:10.173 09:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:10.173 09:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.173 09:59:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.173 09:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:10.173 09:59:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:10.173 09:59:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.433 09:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:10.433 09:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:10.433 09:59:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.433 09:59:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:10.433 [2024-10-21 09:59:46.777841] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:10.433 09:59:46 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.433 09:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:10.433 09:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:10.433 09:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:10.433 09:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:10.433 09:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:10.433 09:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:10.433 09:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:10.433 09:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:10.433 09:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:10.433 09:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:10.433 09:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.433 09:59:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.433 09:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:10.433 09:59:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:10.433 09:59:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.433 09:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:10.433 "name": "raid_bdev1", 00:15:10.433 "uuid": "d50fc346-b863-478a-89b3-51700cd732b5", 00:15:10.433 "strip_size_kb": 0, 00:15:10.433 "state": 
"online", 00:15:10.433 "raid_level": "raid1", 00:15:10.433 "superblock": true, 00:15:10.433 "num_base_bdevs": 4, 00:15:10.433 "num_base_bdevs_discovered": 2, 00:15:10.433 "num_base_bdevs_operational": 2, 00:15:10.433 "base_bdevs_list": [ 00:15:10.433 { 00:15:10.433 "name": null, 00:15:10.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.433 "is_configured": false, 00:15:10.433 "data_offset": 0, 00:15:10.433 "data_size": 63488 00:15:10.433 }, 00:15:10.433 { 00:15:10.433 "name": null, 00:15:10.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.433 "is_configured": false, 00:15:10.433 "data_offset": 2048, 00:15:10.433 "data_size": 63488 00:15:10.433 }, 00:15:10.433 { 00:15:10.433 "name": "BaseBdev3", 00:15:10.433 "uuid": "edbaae95-e4d1-512f-b664-2ae06c5650f2", 00:15:10.433 "is_configured": true, 00:15:10.433 "data_offset": 2048, 00:15:10.433 "data_size": 63488 00:15:10.433 }, 00:15:10.433 { 00:15:10.433 "name": "BaseBdev4", 00:15:10.433 "uuid": "ec26310a-6e93-53b9-b87d-7f029c642a26", 00:15:10.433 "is_configured": true, 00:15:10.433 "data_offset": 2048, 00:15:10.433 "data_size": 63488 00:15:10.433 } 00:15:10.433 ] 00:15:10.433 }' 00:15:10.433 09:59:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:10.433 09:59:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:10.694 09:59:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:10.694 09:59:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.694 09:59:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:10.694 [2024-10-21 09:59:47.241133] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:10.694 [2024-10-21 09:59:47.241382] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev 
raid_bdev1 (6) 00:15:10.694 [2024-10-21 09:59:47.241409] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:15:10.694 [2024-10-21 09:59:47.241447] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:10.694 [2024-10-21 09:59:47.257611] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000036fc0 00:15:10.694 09:59:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.694 09:59:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:10.694 [2024-10-21 09:59:47.259760] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:12.073 09:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:12.073 09:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:12.073 09:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:12.073 09:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:12.073 09:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:12.073 09:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.073 09:59:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.073 09:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:12.073 09:59:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:12.073 09:59:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.073 09:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:12.073 
"name": "raid_bdev1", 00:15:12.073 "uuid": "d50fc346-b863-478a-89b3-51700cd732b5", 00:15:12.073 "strip_size_kb": 0, 00:15:12.073 "state": "online", 00:15:12.073 "raid_level": "raid1", 00:15:12.073 "superblock": true, 00:15:12.073 "num_base_bdevs": 4, 00:15:12.073 "num_base_bdevs_discovered": 3, 00:15:12.073 "num_base_bdevs_operational": 3, 00:15:12.073 "process": { 00:15:12.073 "type": "rebuild", 00:15:12.073 "target": "spare", 00:15:12.073 "progress": { 00:15:12.073 "blocks": 20480, 00:15:12.073 "percent": 32 00:15:12.073 } 00:15:12.073 }, 00:15:12.073 "base_bdevs_list": [ 00:15:12.073 { 00:15:12.073 "name": "spare", 00:15:12.073 "uuid": "55c58184-7454-5b0a-9c0c-d9519f6b83b4", 00:15:12.073 "is_configured": true, 00:15:12.073 "data_offset": 2048, 00:15:12.073 "data_size": 63488 00:15:12.073 }, 00:15:12.073 { 00:15:12.073 "name": null, 00:15:12.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:12.073 "is_configured": false, 00:15:12.073 "data_offset": 2048, 00:15:12.073 "data_size": 63488 00:15:12.073 }, 00:15:12.073 { 00:15:12.073 "name": "BaseBdev3", 00:15:12.073 "uuid": "edbaae95-e4d1-512f-b664-2ae06c5650f2", 00:15:12.073 "is_configured": true, 00:15:12.073 "data_offset": 2048, 00:15:12.073 "data_size": 63488 00:15:12.073 }, 00:15:12.073 { 00:15:12.073 "name": "BaseBdev4", 00:15:12.073 "uuid": "ec26310a-6e93-53b9-b87d-7f029c642a26", 00:15:12.073 "is_configured": true, 00:15:12.073 "data_offset": 2048, 00:15:12.073 "data_size": 63488 00:15:12.073 } 00:15:12.073 ] 00:15:12.073 }' 00:15:12.073 09:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:12.073 09:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:12.073 09:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:12.073 09:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:12.073 
09:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:12.073 09:59:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.073 09:59:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:12.073 [2024-10-21 09:59:48.407831] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:12.073 [2024-10-21 09:59:48.469500] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:12.073 [2024-10-21 09:59:48.469644] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:12.073 [2024-10-21 09:59:48.469663] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:12.073 [2024-10-21 09:59:48.469674] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:12.073 09:59:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.073 09:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:12.073 09:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:12.073 09:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:12.073 09:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:12.073 09:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:12.073 09:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:12.073 09:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:12.073 09:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:12.073 09:59:48 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:12.073 09:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:12.073 09:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.073 09:59:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.073 09:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:12.073 09:59:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:12.073 09:59:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.073 09:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:12.073 "name": "raid_bdev1", 00:15:12.073 "uuid": "d50fc346-b863-478a-89b3-51700cd732b5", 00:15:12.073 "strip_size_kb": 0, 00:15:12.073 "state": "online", 00:15:12.073 "raid_level": "raid1", 00:15:12.073 "superblock": true, 00:15:12.073 "num_base_bdevs": 4, 00:15:12.073 "num_base_bdevs_discovered": 2, 00:15:12.073 "num_base_bdevs_operational": 2, 00:15:12.073 "base_bdevs_list": [ 00:15:12.073 { 00:15:12.073 "name": null, 00:15:12.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:12.073 "is_configured": false, 00:15:12.073 "data_offset": 0, 00:15:12.073 "data_size": 63488 00:15:12.073 }, 00:15:12.073 { 00:15:12.073 "name": null, 00:15:12.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:12.073 "is_configured": false, 00:15:12.073 "data_offset": 2048, 00:15:12.073 "data_size": 63488 00:15:12.073 }, 00:15:12.073 { 00:15:12.073 "name": "BaseBdev3", 00:15:12.073 "uuid": "edbaae95-e4d1-512f-b664-2ae06c5650f2", 00:15:12.073 "is_configured": true, 00:15:12.073 "data_offset": 2048, 00:15:12.073 "data_size": 63488 00:15:12.073 }, 00:15:12.073 { 00:15:12.073 "name": "BaseBdev4", 00:15:12.073 "uuid": 
"ec26310a-6e93-53b9-b87d-7f029c642a26", 00:15:12.073 "is_configured": true, 00:15:12.073 "data_offset": 2048, 00:15:12.073 "data_size": 63488 00:15:12.073 } 00:15:12.073 ] 00:15:12.073 }' 00:15:12.073 09:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:12.073 09:59:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:12.640 09:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:12.640 09:59:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.640 09:59:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:12.640 [2024-10-21 09:59:48.980314] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:12.640 [2024-10-21 09:59:48.980394] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:12.640 [2024-10-21 09:59:48.980419] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:15:12.640 [2024-10-21 09:59:48.980431] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:12.640 [2024-10-21 09:59:48.981032] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:12.640 [2024-10-21 09:59:48.981056] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:12.640 [2024-10-21 09:59:48.981167] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:12.640 [2024-10-21 09:59:48.981186] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:15:12.640 [2024-10-21 09:59:48.981197] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:12.640 [2024-10-21 09:59:48.981221] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:12.640 [2024-10-21 09:59:48.997444] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037090 00:15:12.640 spare 00:15:12.640 09:59:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.640 09:59:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:12.640 [2024-10-21 09:59:48.999645] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:13.579 09:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:13.579 09:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:13.579 09:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:13.579 09:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:13.579 09:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:13.579 09:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.579 09:59:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.579 09:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.579 09:59:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:13.579 09:59:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.579 09:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:13.579 "name": "raid_bdev1", 00:15:13.579 "uuid": "d50fc346-b863-478a-89b3-51700cd732b5", 00:15:13.579 "strip_size_kb": 0, 00:15:13.579 
"state": "online", 00:15:13.579 "raid_level": "raid1", 00:15:13.579 "superblock": true, 00:15:13.579 "num_base_bdevs": 4, 00:15:13.579 "num_base_bdevs_discovered": 3, 00:15:13.579 "num_base_bdevs_operational": 3, 00:15:13.579 "process": { 00:15:13.579 "type": "rebuild", 00:15:13.579 "target": "spare", 00:15:13.579 "progress": { 00:15:13.579 "blocks": 20480, 00:15:13.579 "percent": 32 00:15:13.579 } 00:15:13.579 }, 00:15:13.579 "base_bdevs_list": [ 00:15:13.579 { 00:15:13.579 "name": "spare", 00:15:13.579 "uuid": "55c58184-7454-5b0a-9c0c-d9519f6b83b4", 00:15:13.579 "is_configured": true, 00:15:13.579 "data_offset": 2048, 00:15:13.579 "data_size": 63488 00:15:13.579 }, 00:15:13.579 { 00:15:13.579 "name": null, 00:15:13.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.579 "is_configured": false, 00:15:13.579 "data_offset": 2048, 00:15:13.579 "data_size": 63488 00:15:13.579 }, 00:15:13.579 { 00:15:13.579 "name": "BaseBdev3", 00:15:13.579 "uuid": "edbaae95-e4d1-512f-b664-2ae06c5650f2", 00:15:13.579 "is_configured": true, 00:15:13.579 "data_offset": 2048, 00:15:13.579 "data_size": 63488 00:15:13.579 }, 00:15:13.579 { 00:15:13.579 "name": "BaseBdev4", 00:15:13.579 "uuid": "ec26310a-6e93-53b9-b87d-7f029c642a26", 00:15:13.579 "is_configured": true, 00:15:13.579 "data_offset": 2048, 00:15:13.579 "data_size": 63488 00:15:13.579 } 00:15:13.579 ] 00:15:13.579 }' 00:15:13.579 09:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:13.579 09:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:13.579 09:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:13.579 09:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:13.579 09:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:13.579 09:59:50 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.579 09:59:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:13.579 [2024-10-21 09:59:50.158811] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:13.838 [2024-10-21 09:59:50.209537] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:13.838 [2024-10-21 09:59:50.209679] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:13.838 [2024-10-21 09:59:50.209702] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:13.838 [2024-10-21 09:59:50.209710] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:13.838 09:59:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.838 09:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:13.838 09:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:13.838 09:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:13.838 09:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:13.838 09:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:13.838 09:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:13.838 09:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:13.838 09:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:13.838 09:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:13.838 09:59:50 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:13.838 09:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.838 09:59:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.838 09:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.838 09:59:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:13.838 09:59:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.838 09:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:13.838 "name": "raid_bdev1", 00:15:13.838 "uuid": "d50fc346-b863-478a-89b3-51700cd732b5", 00:15:13.838 "strip_size_kb": 0, 00:15:13.838 "state": "online", 00:15:13.838 "raid_level": "raid1", 00:15:13.838 "superblock": true, 00:15:13.838 "num_base_bdevs": 4, 00:15:13.838 "num_base_bdevs_discovered": 2, 00:15:13.838 "num_base_bdevs_operational": 2, 00:15:13.838 "base_bdevs_list": [ 00:15:13.838 { 00:15:13.838 "name": null, 00:15:13.838 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.838 "is_configured": false, 00:15:13.838 "data_offset": 0, 00:15:13.838 "data_size": 63488 00:15:13.838 }, 00:15:13.838 { 00:15:13.838 "name": null, 00:15:13.838 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.838 "is_configured": false, 00:15:13.838 "data_offset": 2048, 00:15:13.838 "data_size": 63488 00:15:13.838 }, 00:15:13.838 { 00:15:13.838 "name": "BaseBdev3", 00:15:13.838 "uuid": "edbaae95-e4d1-512f-b664-2ae06c5650f2", 00:15:13.838 "is_configured": true, 00:15:13.838 "data_offset": 2048, 00:15:13.838 "data_size": 63488 00:15:13.838 }, 00:15:13.838 { 00:15:13.838 "name": "BaseBdev4", 00:15:13.838 "uuid": "ec26310a-6e93-53b9-b87d-7f029c642a26", 00:15:13.838 "is_configured": true, 00:15:13.838 "data_offset": 2048, 00:15:13.838 
"data_size": 63488 00:15:13.838 } 00:15:13.838 ] 00:15:13.838 }' 00:15:13.838 09:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:13.838 09:59:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:14.097 09:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:14.097 09:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:14.097 09:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:14.097 09:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:14.097 09:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:14.097 09:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.357 09:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.357 09:59:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.357 09:59:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:14.357 09:59:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.357 09:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:14.357 "name": "raid_bdev1", 00:15:14.357 "uuid": "d50fc346-b863-478a-89b3-51700cd732b5", 00:15:14.357 "strip_size_kb": 0, 00:15:14.357 "state": "online", 00:15:14.357 "raid_level": "raid1", 00:15:14.357 "superblock": true, 00:15:14.357 "num_base_bdevs": 4, 00:15:14.357 "num_base_bdevs_discovered": 2, 00:15:14.357 "num_base_bdevs_operational": 2, 00:15:14.357 "base_bdevs_list": [ 00:15:14.357 { 00:15:14.357 "name": null, 00:15:14.357 "uuid": "00000000-0000-0000-0000-000000000000", 
00:15:14.357 "is_configured": false, 00:15:14.357 "data_offset": 0, 00:15:14.357 "data_size": 63488 00:15:14.357 }, 00:15:14.357 { 00:15:14.357 "name": null, 00:15:14.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.357 "is_configured": false, 00:15:14.357 "data_offset": 2048, 00:15:14.357 "data_size": 63488 00:15:14.357 }, 00:15:14.357 { 00:15:14.357 "name": "BaseBdev3", 00:15:14.357 "uuid": "edbaae95-e4d1-512f-b664-2ae06c5650f2", 00:15:14.357 "is_configured": true, 00:15:14.357 "data_offset": 2048, 00:15:14.357 "data_size": 63488 00:15:14.357 }, 00:15:14.357 { 00:15:14.357 "name": "BaseBdev4", 00:15:14.357 "uuid": "ec26310a-6e93-53b9-b87d-7f029c642a26", 00:15:14.357 "is_configured": true, 00:15:14.357 "data_offset": 2048, 00:15:14.357 "data_size": 63488 00:15:14.357 } 00:15:14.357 ] 00:15:14.357 }' 00:15:14.357 09:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:14.357 09:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:14.357 09:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:14.357 09:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:14.357 09:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:14.357 09:59:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.357 09:59:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:14.357 09:59:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.357 09:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:14.357 09:59:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.357 09:59:50 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:14.357 [2024-10-21 09:59:50.832191] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:14.357 [2024-10-21 09:59:50.832271] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:14.357 [2024-10-21 09:59:50.832302] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:15:14.357 [2024-10-21 09:59:50.832312] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:14.357 [2024-10-21 09:59:50.832892] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:14.357 [2024-10-21 09:59:50.832911] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:14.357 [2024-10-21 09:59:50.833023] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:14.357 [2024-10-21 09:59:50.833042] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:15:14.357 [2024-10-21 09:59:50.833053] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:14.357 [2024-10-21 09:59:50.833065] bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:14.357 BaseBdev1 00:15:14.357 09:59:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.357 09:59:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:15.294 09:59:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:15.294 09:59:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:15.294 09:59:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:15:15.294 09:59:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:15.294 09:59:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:15.294 09:59:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:15.294 09:59:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:15.294 09:59:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:15.294 09:59:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:15.294 09:59:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:15.294 09:59:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.294 09:59:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.294 09:59:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.294 09:59:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:15.294 09:59:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.552 09:59:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:15.552 "name": "raid_bdev1", 00:15:15.552 "uuid": "d50fc346-b863-478a-89b3-51700cd732b5", 00:15:15.552 "strip_size_kb": 0, 00:15:15.552 "state": "online", 00:15:15.552 "raid_level": "raid1", 00:15:15.552 "superblock": true, 00:15:15.552 "num_base_bdevs": 4, 00:15:15.552 "num_base_bdevs_discovered": 2, 00:15:15.552 "num_base_bdevs_operational": 2, 00:15:15.552 "base_bdevs_list": [ 00:15:15.552 { 00:15:15.552 "name": null, 00:15:15.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.552 "is_configured": false, 00:15:15.552 
"data_offset": 0, 00:15:15.552 "data_size": 63488 00:15:15.552 }, 00:15:15.552 { 00:15:15.552 "name": null, 00:15:15.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.552 "is_configured": false, 00:15:15.552 "data_offset": 2048, 00:15:15.552 "data_size": 63488 00:15:15.552 }, 00:15:15.552 { 00:15:15.552 "name": "BaseBdev3", 00:15:15.552 "uuid": "edbaae95-e4d1-512f-b664-2ae06c5650f2", 00:15:15.552 "is_configured": true, 00:15:15.552 "data_offset": 2048, 00:15:15.552 "data_size": 63488 00:15:15.552 }, 00:15:15.552 { 00:15:15.552 "name": "BaseBdev4", 00:15:15.552 "uuid": "ec26310a-6e93-53b9-b87d-7f029c642a26", 00:15:15.552 "is_configured": true, 00:15:15.552 "data_offset": 2048, 00:15:15.552 "data_size": 63488 00:15:15.552 } 00:15:15.552 ] 00:15:15.552 }' 00:15:15.552 09:59:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:15.552 09:59:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:15.810 09:59:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:15.810 09:59:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:15.810 09:59:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:15.810 09:59:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:15.810 09:59:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:15.810 09:59:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.810 09:59:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.810 09:59:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.810 09:59:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set 
+x 00:15:15.810 09:59:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.810 09:59:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:15.810 "name": "raid_bdev1", 00:15:15.810 "uuid": "d50fc346-b863-478a-89b3-51700cd732b5", 00:15:15.810 "strip_size_kb": 0, 00:15:15.810 "state": "online", 00:15:15.810 "raid_level": "raid1", 00:15:15.810 "superblock": true, 00:15:15.810 "num_base_bdevs": 4, 00:15:15.810 "num_base_bdevs_discovered": 2, 00:15:15.810 "num_base_bdevs_operational": 2, 00:15:15.810 "base_bdevs_list": [ 00:15:15.810 { 00:15:15.810 "name": null, 00:15:15.810 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.810 "is_configured": false, 00:15:15.810 "data_offset": 0, 00:15:15.810 "data_size": 63488 00:15:15.810 }, 00:15:15.810 { 00:15:15.810 "name": null, 00:15:15.810 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.810 "is_configured": false, 00:15:15.810 "data_offset": 2048, 00:15:15.810 "data_size": 63488 00:15:15.810 }, 00:15:15.810 { 00:15:15.810 "name": "BaseBdev3", 00:15:15.810 "uuid": "edbaae95-e4d1-512f-b664-2ae06c5650f2", 00:15:15.810 "is_configured": true, 00:15:15.810 "data_offset": 2048, 00:15:15.810 "data_size": 63488 00:15:15.810 }, 00:15:15.810 { 00:15:15.810 "name": "BaseBdev4", 00:15:15.810 "uuid": "ec26310a-6e93-53b9-b87d-7f029c642a26", 00:15:15.810 "is_configured": true, 00:15:15.810 "data_offset": 2048, 00:15:15.810 "data_size": 63488 00:15:15.810 } 00:15:15.810 ] 00:15:15.810 }' 00:15:15.810 09:59:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:15.810 09:59:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:15.810 09:59:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:16.069 09:59:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:16.069 
09:59:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:16.069 09:59:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:15:16.069 09:59:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:16.069 09:59:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:16.069 09:59:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:16.069 09:59:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:16.069 09:59:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:16.069 09:59:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:16.069 09:59:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.069 09:59:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:16.069 [2024-10-21 09:59:52.425752] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:16.069 [2024-10-21 09:59:52.425975] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:15:16.069 [2024-10-21 09:59:52.426000] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:16.069 request: 00:15:16.069 { 00:15:16.069 "base_bdev": "BaseBdev1", 00:15:16.069 "raid_bdev": "raid_bdev1", 00:15:16.069 "method": "bdev_raid_add_base_bdev", 00:15:16.069 "req_id": 1 00:15:16.069 } 00:15:16.069 Got JSON-RPC error response 00:15:16.069 response: 00:15:16.069 { 00:15:16.069 "code": -22, 00:15:16.069 
"message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:16.069 } 00:15:16.069 09:59:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:16.069 09:59:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:15:16.069 09:59:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:16.069 09:59:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:16.069 09:59:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:16.069 09:59:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:17.007 09:59:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:17.007 09:59:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:17.007 09:59:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:17.007 09:59:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:17.007 09:59:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:17.007 09:59:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:17.007 09:59:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:17.007 09:59:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:17.007 09:59:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:17.007 09:59:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:17.007 09:59:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.007 09:59:53 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:17.007 09:59:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.007 09:59:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:17.007 09:59:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.007 09:59:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:17.007 "name": "raid_bdev1", 00:15:17.007 "uuid": "d50fc346-b863-478a-89b3-51700cd732b5", 00:15:17.007 "strip_size_kb": 0, 00:15:17.007 "state": "online", 00:15:17.007 "raid_level": "raid1", 00:15:17.007 "superblock": true, 00:15:17.007 "num_base_bdevs": 4, 00:15:17.007 "num_base_bdevs_discovered": 2, 00:15:17.007 "num_base_bdevs_operational": 2, 00:15:17.007 "base_bdevs_list": [ 00:15:17.007 { 00:15:17.007 "name": null, 00:15:17.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.007 "is_configured": false, 00:15:17.007 "data_offset": 0, 00:15:17.007 "data_size": 63488 00:15:17.007 }, 00:15:17.007 { 00:15:17.007 "name": null, 00:15:17.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.007 "is_configured": false, 00:15:17.007 "data_offset": 2048, 00:15:17.007 "data_size": 63488 00:15:17.007 }, 00:15:17.007 { 00:15:17.007 "name": "BaseBdev3", 00:15:17.007 "uuid": "edbaae95-e4d1-512f-b664-2ae06c5650f2", 00:15:17.007 "is_configured": true, 00:15:17.007 "data_offset": 2048, 00:15:17.007 "data_size": 63488 00:15:17.007 }, 00:15:17.007 { 00:15:17.007 "name": "BaseBdev4", 00:15:17.007 "uuid": "ec26310a-6e93-53b9-b87d-7f029c642a26", 00:15:17.007 "is_configured": true, 00:15:17.007 "data_offset": 2048, 00:15:17.007 "data_size": 63488 00:15:17.007 } 00:15:17.007 ] 00:15:17.007 }' 00:15:17.007 09:59:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:17.007 09:59:53 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:17.577 09:59:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:17.577 09:59:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:17.577 09:59:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:17.577 09:59:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:17.577 09:59:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:17.577 09:59:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:17.577 09:59:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.577 09:59:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.577 09:59:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:17.578 09:59:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.578 09:59:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:17.578 "name": "raid_bdev1", 00:15:17.578 "uuid": "d50fc346-b863-478a-89b3-51700cd732b5", 00:15:17.578 "strip_size_kb": 0, 00:15:17.578 "state": "online", 00:15:17.578 "raid_level": "raid1", 00:15:17.578 "superblock": true, 00:15:17.578 "num_base_bdevs": 4, 00:15:17.578 "num_base_bdevs_discovered": 2, 00:15:17.578 "num_base_bdevs_operational": 2, 00:15:17.578 "base_bdevs_list": [ 00:15:17.578 { 00:15:17.578 "name": null, 00:15:17.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.578 "is_configured": false, 00:15:17.578 "data_offset": 0, 00:15:17.578 "data_size": 63488 00:15:17.578 }, 00:15:17.578 { 00:15:17.578 "name": null, 00:15:17.578 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:17.578 "is_configured": false, 00:15:17.578 "data_offset": 2048, 00:15:17.578 "data_size": 63488 00:15:17.578 }, 00:15:17.578 { 00:15:17.578 "name": "BaseBdev3", 00:15:17.578 "uuid": "edbaae95-e4d1-512f-b664-2ae06c5650f2", 00:15:17.578 "is_configured": true, 00:15:17.578 "data_offset": 2048, 00:15:17.578 "data_size": 63488 00:15:17.578 }, 00:15:17.578 { 00:15:17.578 "name": "BaseBdev4", 00:15:17.578 "uuid": "ec26310a-6e93-53b9-b87d-7f029c642a26", 00:15:17.578 "is_configured": true, 00:15:17.578 "data_offset": 2048, 00:15:17.578 "data_size": 63488 00:15:17.578 } 00:15:17.578 ] 00:15:17.578 }' 00:15:17.578 09:59:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:17.578 09:59:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:17.578 09:59:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:17.578 09:59:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:17.578 09:59:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 78797 00:15:17.578 09:59:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@950 -- # '[' -z 78797 ']' 00:15:17.578 09:59:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # kill -0 78797 00:15:17.578 09:59:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # uname 00:15:17.578 09:59:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:17.578 09:59:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78797 00:15:17.578 09:59:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:17.578 09:59:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo 
']' 00:15:17.578 killing process with pid 78797 00:15:17.578 09:59:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78797' 00:15:17.578 09:59:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@969 -- # kill 78797 00:15:17.578 Received shutdown signal, test time was about 18.052377 seconds 00:15:17.578 00:15:17.578 Latency(us) 00:15:17.578 [2024-10-21T09:59:54.173Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:17.578 [2024-10-21T09:59:54.173Z] =================================================================================================================== 00:15:17.578 [2024-10-21T09:59:54.173Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:17.578 [2024-10-21 09:59:54.091153] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:17.578 09:59:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@974 -- # wait 78797 00:15:17.578 [2024-10-21 09:59:54.091329] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:17.578 [2024-10-21 09:59:54.091408] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:17.578 [2024-10-21 09:59:54.091429] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000005f00 name raid_bdev1, state offline 00:15:18.146 [2024-10-21 09:59:54.555823] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:19.546 09:59:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:15:19.546 00:15:19.546 real 0m21.708s 00:15:19.546 user 0m28.106s 00:15:19.546 sys 0m2.901s 00:15:19.546 09:59:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:19.546 ************************************ 00:15:19.546 END TEST raid_rebuild_test_sb_io 00:15:19.546 ************************************ 00:15:19.546 09:59:55 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:15:19.546 09:59:55 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:15:19.546 09:59:55 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:15:19.546 09:59:55 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:19.546 09:59:55 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:19.546 09:59:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:19.546 ************************************ 00:15:19.546 START TEST raid5f_state_function_test 00:15:19.546 ************************************ 00:15:19.546 09:59:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 3 false 00:15:19.546 09:59:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:15:19.546 09:59:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:15:19.546 09:59:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:15:19.546 09:59:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:19.546 09:59:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:19.546 09:59:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:19.546 09:59:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:19.546 09:59:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:19.546 09:59:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:19.546 09:59:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:19.546 09:59:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:19.546 09:59:55 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:19.546 09:59:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:19.546 09:59:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:19.546 09:59:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:19.546 09:59:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:19.546 09:59:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:19.546 09:59:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:19.546 09:59:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:19.546 09:59:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:19.546 09:59:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:19.546 09:59:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:15:19.546 09:59:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:15:19.546 09:59:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:19.546 09:59:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:15:19.546 09:59:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:15:19.546 09:59:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=79526 00:15:19.546 Process raid pid: 79526 00:15:19.546 09:59:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:19.546 
09:59:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 79526' 00:15:19.546 09:59:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 79526 00:15:19.546 09:59:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 79526 ']' 00:15:19.546 09:59:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:19.546 09:59:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:19.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:19.546 09:59:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:19.546 09:59:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:19.546 09:59:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.546 [2024-10-21 09:59:56.034145] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
00:15:19.546 [2024-10-21 09:59:56.034277] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:19.805 [2024-10-21 09:59:56.203084] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:19.805 [2024-10-21 09:59:56.349558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:20.063 [2024-10-21 09:59:56.617637] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:20.063 [2024-10-21 09:59:56.617705] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:20.323 09:59:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:20.323 09:59:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:15:20.324 09:59:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:20.324 09:59:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.324 09:59:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.324 [2024-10-21 09:59:56.886837] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:20.324 [2024-10-21 09:59:56.886910] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:20.324 [2024-10-21 09:59:56.886921] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:20.324 [2024-10-21 09:59:56.886931] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:20.324 [2024-10-21 09:59:56.886938] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:15:20.324 [2024-10-21 09:59:56.886948] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:20.324 09:59:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.324 09:59:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:20.324 09:59:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:20.324 09:59:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:20.324 09:59:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:20.324 09:59:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:20.324 09:59:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:20.324 09:59:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:20.324 09:59:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:20.324 09:59:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:20.324 09:59:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:20.324 09:59:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.324 09:59:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:20.324 09:59:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.324 09:59:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.324 09:59:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:15:20.582 09:59:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:20.582 "name": "Existed_Raid", 00:15:20.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.582 "strip_size_kb": 64, 00:15:20.582 "state": "configuring", 00:15:20.582 "raid_level": "raid5f", 00:15:20.582 "superblock": false, 00:15:20.582 "num_base_bdevs": 3, 00:15:20.582 "num_base_bdevs_discovered": 0, 00:15:20.582 "num_base_bdevs_operational": 3, 00:15:20.582 "base_bdevs_list": [ 00:15:20.582 { 00:15:20.582 "name": "BaseBdev1", 00:15:20.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.582 "is_configured": false, 00:15:20.582 "data_offset": 0, 00:15:20.582 "data_size": 0 00:15:20.582 }, 00:15:20.582 { 00:15:20.582 "name": "BaseBdev2", 00:15:20.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.582 "is_configured": false, 00:15:20.582 "data_offset": 0, 00:15:20.582 "data_size": 0 00:15:20.582 }, 00:15:20.582 { 00:15:20.582 "name": "BaseBdev3", 00:15:20.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.582 "is_configured": false, 00:15:20.582 "data_offset": 0, 00:15:20.582 "data_size": 0 00:15:20.582 } 00:15:20.582 ] 00:15:20.582 }' 00:15:20.582 09:59:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:20.582 09:59:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.841 09:59:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:20.841 09:59:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.841 09:59:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.841 [2024-10-21 09:59:57.353984] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:20.841 [2024-10-21 09:59:57.354049] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000005b80 name Existed_Raid, state configuring 00:15:20.841 09:59:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.841 09:59:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:20.841 09:59:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.841 09:59:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.841 [2024-10-21 09:59:57.362015] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:20.841 [2024-10-21 09:59:57.362070] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:20.841 [2024-10-21 09:59:57.362080] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:20.841 [2024-10-21 09:59:57.362090] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:20.841 [2024-10-21 09:59:57.362096] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:20.841 [2024-10-21 09:59:57.362106] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:20.841 09:59:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.841 09:59:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:20.841 09:59:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.841 09:59:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.841 [2024-10-21 09:59:57.419702] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:20.841 BaseBdev1 00:15:20.841 09:59:57 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.841 09:59:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:20.841 09:59:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:15:20.842 09:59:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:20.842 09:59:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:20.842 09:59:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:20.842 09:59:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:20.842 09:59:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:20.842 09:59:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.842 09:59:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.842 09:59:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.842 09:59:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:20.842 09:59:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.842 09:59:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.101 [ 00:15:21.101 { 00:15:21.101 "name": "BaseBdev1", 00:15:21.101 "aliases": [ 00:15:21.101 "9f78fcf8-c808-4d37-8347-e0245f94c584" 00:15:21.101 ], 00:15:21.101 "product_name": "Malloc disk", 00:15:21.101 "block_size": 512, 00:15:21.101 "num_blocks": 65536, 00:15:21.101 "uuid": "9f78fcf8-c808-4d37-8347-e0245f94c584", 00:15:21.101 "assigned_rate_limits": { 00:15:21.101 "rw_ios_per_sec": 0, 00:15:21.101 
"rw_mbytes_per_sec": 0, 00:15:21.101 "r_mbytes_per_sec": 0, 00:15:21.101 "w_mbytes_per_sec": 0 00:15:21.101 }, 00:15:21.101 "claimed": true, 00:15:21.101 "claim_type": "exclusive_write", 00:15:21.101 "zoned": false, 00:15:21.101 "supported_io_types": { 00:15:21.101 "read": true, 00:15:21.101 "write": true, 00:15:21.101 "unmap": true, 00:15:21.101 "flush": true, 00:15:21.101 "reset": true, 00:15:21.101 "nvme_admin": false, 00:15:21.101 "nvme_io": false, 00:15:21.101 "nvme_io_md": false, 00:15:21.101 "write_zeroes": true, 00:15:21.101 "zcopy": true, 00:15:21.101 "get_zone_info": false, 00:15:21.101 "zone_management": false, 00:15:21.101 "zone_append": false, 00:15:21.101 "compare": false, 00:15:21.101 "compare_and_write": false, 00:15:21.101 "abort": true, 00:15:21.101 "seek_hole": false, 00:15:21.101 "seek_data": false, 00:15:21.101 "copy": true, 00:15:21.101 "nvme_iov_md": false 00:15:21.101 }, 00:15:21.101 "memory_domains": [ 00:15:21.101 { 00:15:21.101 "dma_device_id": "system", 00:15:21.101 "dma_device_type": 1 00:15:21.101 }, 00:15:21.101 { 00:15:21.101 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:21.101 "dma_device_type": 2 00:15:21.101 } 00:15:21.101 ], 00:15:21.101 "driver_specific": {} 00:15:21.101 } 00:15:21.101 ] 00:15:21.101 09:59:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.101 09:59:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:21.101 09:59:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:21.101 09:59:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:21.101 09:59:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:21.101 09:59:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:21.101 09:59:57 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:21.101 09:59:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:21.101 09:59:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:21.101 09:59:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:21.101 09:59:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:21.101 09:59:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:21.101 09:59:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.101 09:59:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:21.101 09:59:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.101 09:59:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.101 09:59:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.101 09:59:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:21.101 "name": "Existed_Raid", 00:15:21.101 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.101 "strip_size_kb": 64, 00:15:21.101 "state": "configuring", 00:15:21.101 "raid_level": "raid5f", 00:15:21.101 "superblock": false, 00:15:21.101 "num_base_bdevs": 3, 00:15:21.101 "num_base_bdevs_discovered": 1, 00:15:21.101 "num_base_bdevs_operational": 3, 00:15:21.101 "base_bdevs_list": [ 00:15:21.101 { 00:15:21.101 "name": "BaseBdev1", 00:15:21.101 "uuid": "9f78fcf8-c808-4d37-8347-e0245f94c584", 00:15:21.101 "is_configured": true, 00:15:21.101 "data_offset": 0, 00:15:21.101 "data_size": 65536 00:15:21.101 }, 00:15:21.101 { 00:15:21.101 "name": 
"BaseBdev2", 00:15:21.101 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.101 "is_configured": false, 00:15:21.101 "data_offset": 0, 00:15:21.101 "data_size": 0 00:15:21.101 }, 00:15:21.101 { 00:15:21.101 "name": "BaseBdev3", 00:15:21.101 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.101 "is_configured": false, 00:15:21.101 "data_offset": 0, 00:15:21.101 "data_size": 0 00:15:21.101 } 00:15:21.101 ] 00:15:21.101 }' 00:15:21.101 09:59:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:21.101 09:59:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.360 09:59:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:21.360 09:59:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.360 09:59:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.360 [2024-10-21 09:59:57.914946] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:21.360 [2024-10-21 09:59:57.915027] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000005f00 name Existed_Raid, state configuring 00:15:21.360 09:59:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.360 09:59:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:21.360 09:59:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.360 09:59:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.360 [2024-10-21 09:59:57.927051] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:21.360 [2024-10-21 09:59:57.929450] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:15:21.360 [2024-10-21 09:59:57.929509] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:21.360 [2024-10-21 09:59:57.929521] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:21.360 [2024-10-21 09:59:57.929531] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:21.360 09:59:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.360 09:59:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:21.360 09:59:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:21.360 09:59:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:21.360 09:59:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:21.360 09:59:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:21.360 09:59:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:21.360 09:59:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:21.360 09:59:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:21.360 09:59:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:21.360 09:59:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:21.360 09:59:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:21.360 09:59:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:21.360 09:59:57 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.360 09:59:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:21.360 09:59:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.360 09:59:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.619 09:59:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.619 09:59:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:21.619 "name": "Existed_Raid", 00:15:21.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.619 "strip_size_kb": 64, 00:15:21.619 "state": "configuring", 00:15:21.619 "raid_level": "raid5f", 00:15:21.619 "superblock": false, 00:15:21.619 "num_base_bdevs": 3, 00:15:21.619 "num_base_bdevs_discovered": 1, 00:15:21.619 "num_base_bdevs_operational": 3, 00:15:21.619 "base_bdevs_list": [ 00:15:21.619 { 00:15:21.619 "name": "BaseBdev1", 00:15:21.619 "uuid": "9f78fcf8-c808-4d37-8347-e0245f94c584", 00:15:21.619 "is_configured": true, 00:15:21.619 "data_offset": 0, 00:15:21.619 "data_size": 65536 00:15:21.619 }, 00:15:21.619 { 00:15:21.619 "name": "BaseBdev2", 00:15:21.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.619 "is_configured": false, 00:15:21.619 "data_offset": 0, 00:15:21.619 "data_size": 0 00:15:21.619 }, 00:15:21.619 { 00:15:21.619 "name": "BaseBdev3", 00:15:21.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.619 "is_configured": false, 00:15:21.619 "data_offset": 0, 00:15:21.619 "data_size": 0 00:15:21.619 } 00:15:21.619 ] 00:15:21.619 }' 00:15:21.619 09:59:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:21.619 09:59:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.879 09:59:58 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:21.879 09:59:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.879 09:59:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.879 [2024-10-21 09:59:58.461036] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:21.879 BaseBdev2 00:15:21.879 09:59:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.879 09:59:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:21.879 09:59:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:15:21.879 09:59:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:21.879 09:59:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:21.879 09:59:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:21.879 09:59:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:21.879 09:59:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:21.879 09:59:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.879 09:59:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.138 09:59:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.138 09:59:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:22.138 09:59:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.138 09:59:58 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:22.138 [ 00:15:22.138 { 00:15:22.138 "name": "BaseBdev2", 00:15:22.138 "aliases": [ 00:15:22.138 "e63cdd90-647a-4ae2-aac7-524fd7f68f6d" 00:15:22.138 ], 00:15:22.138 "product_name": "Malloc disk", 00:15:22.138 "block_size": 512, 00:15:22.138 "num_blocks": 65536, 00:15:22.138 "uuid": "e63cdd90-647a-4ae2-aac7-524fd7f68f6d", 00:15:22.138 "assigned_rate_limits": { 00:15:22.138 "rw_ios_per_sec": 0, 00:15:22.138 "rw_mbytes_per_sec": 0, 00:15:22.138 "r_mbytes_per_sec": 0, 00:15:22.138 "w_mbytes_per_sec": 0 00:15:22.139 }, 00:15:22.139 "claimed": true, 00:15:22.139 "claim_type": "exclusive_write", 00:15:22.139 "zoned": false, 00:15:22.139 "supported_io_types": { 00:15:22.139 "read": true, 00:15:22.139 "write": true, 00:15:22.139 "unmap": true, 00:15:22.139 "flush": true, 00:15:22.139 "reset": true, 00:15:22.139 "nvme_admin": false, 00:15:22.139 "nvme_io": false, 00:15:22.139 "nvme_io_md": false, 00:15:22.139 "write_zeroes": true, 00:15:22.139 "zcopy": true, 00:15:22.139 "get_zone_info": false, 00:15:22.139 "zone_management": false, 00:15:22.139 "zone_append": false, 00:15:22.139 "compare": false, 00:15:22.139 "compare_and_write": false, 00:15:22.139 "abort": true, 00:15:22.139 "seek_hole": false, 00:15:22.139 "seek_data": false, 00:15:22.139 "copy": true, 00:15:22.139 "nvme_iov_md": false 00:15:22.139 }, 00:15:22.139 "memory_domains": [ 00:15:22.139 { 00:15:22.139 "dma_device_id": "system", 00:15:22.139 "dma_device_type": 1 00:15:22.139 }, 00:15:22.139 { 00:15:22.139 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:22.139 "dma_device_type": 2 00:15:22.139 } 00:15:22.139 ], 00:15:22.139 "driver_specific": {} 00:15:22.139 } 00:15:22.139 ] 00:15:22.139 09:59:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.139 09:59:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:22.139 09:59:58 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:22.139 09:59:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:22.139 09:59:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:22.139 09:59:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:22.139 09:59:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:22.139 09:59:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:22.139 09:59:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:22.139 09:59:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:22.139 09:59:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:22.139 09:59:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:22.139 09:59:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:22.139 09:59:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:22.139 09:59:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:22.139 09:59:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.139 09:59:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.139 09:59:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.139 09:59:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.139 09:59:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:15:22.139 "name": "Existed_Raid", 00:15:22.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.139 "strip_size_kb": 64, 00:15:22.139 "state": "configuring", 00:15:22.139 "raid_level": "raid5f", 00:15:22.139 "superblock": false, 00:15:22.139 "num_base_bdevs": 3, 00:15:22.139 "num_base_bdevs_discovered": 2, 00:15:22.139 "num_base_bdevs_operational": 3, 00:15:22.139 "base_bdevs_list": [ 00:15:22.139 { 00:15:22.139 "name": "BaseBdev1", 00:15:22.139 "uuid": "9f78fcf8-c808-4d37-8347-e0245f94c584", 00:15:22.139 "is_configured": true, 00:15:22.139 "data_offset": 0, 00:15:22.139 "data_size": 65536 00:15:22.139 }, 00:15:22.139 { 00:15:22.139 "name": "BaseBdev2", 00:15:22.139 "uuid": "e63cdd90-647a-4ae2-aac7-524fd7f68f6d", 00:15:22.139 "is_configured": true, 00:15:22.139 "data_offset": 0, 00:15:22.139 "data_size": 65536 00:15:22.139 }, 00:15:22.139 { 00:15:22.139 "name": "BaseBdev3", 00:15:22.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.139 "is_configured": false, 00:15:22.139 "data_offset": 0, 00:15:22.139 "data_size": 0 00:15:22.139 } 00:15:22.139 ] 00:15:22.139 }' 00:15:22.139 09:59:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:22.139 09:59:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.397 09:59:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:22.397 09:59:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.397 09:59:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.397 [2024-10-21 09:59:58.940439] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:22.397 [2024-10-21 09:59:58.940516] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:15:22.397 [2024-10-21 09:59:58.940531] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:22.397 [2024-10-21 09:59:58.940840] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:15:22.397 [2024-10-21 09:59:58.946928] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:15:22.397 [2024-10-21 09:59:58.946955] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006280 00:15:22.397 [2024-10-21 09:59:58.947244] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:22.397 BaseBdev3 00:15:22.397 09:59:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.397 09:59:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:22.397 09:59:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:15:22.397 09:59:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:22.397 09:59:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:22.397 09:59:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:22.397 09:59:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:22.397 09:59:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:22.397 09:59:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.397 09:59:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.397 09:59:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.397 09:59:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:15:22.397 09:59:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.397 09:59:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.397 [ 00:15:22.397 { 00:15:22.397 "name": "BaseBdev3", 00:15:22.397 "aliases": [ 00:15:22.397 "475ccea0-1718-4ab4-900d-63ca755a2722" 00:15:22.397 ], 00:15:22.397 "product_name": "Malloc disk", 00:15:22.397 "block_size": 512, 00:15:22.397 "num_blocks": 65536, 00:15:22.397 "uuid": "475ccea0-1718-4ab4-900d-63ca755a2722", 00:15:22.397 "assigned_rate_limits": { 00:15:22.397 "rw_ios_per_sec": 0, 00:15:22.397 "rw_mbytes_per_sec": 0, 00:15:22.397 "r_mbytes_per_sec": 0, 00:15:22.397 "w_mbytes_per_sec": 0 00:15:22.397 }, 00:15:22.397 "claimed": true, 00:15:22.397 "claim_type": "exclusive_write", 00:15:22.397 "zoned": false, 00:15:22.397 "supported_io_types": { 00:15:22.397 "read": true, 00:15:22.397 "write": true, 00:15:22.397 "unmap": true, 00:15:22.397 "flush": true, 00:15:22.397 "reset": true, 00:15:22.397 "nvme_admin": false, 00:15:22.397 "nvme_io": false, 00:15:22.397 "nvme_io_md": false, 00:15:22.397 "write_zeroes": true, 00:15:22.397 "zcopy": true, 00:15:22.397 "get_zone_info": false, 00:15:22.397 "zone_management": false, 00:15:22.397 "zone_append": false, 00:15:22.397 "compare": false, 00:15:22.397 "compare_and_write": false, 00:15:22.397 "abort": true, 00:15:22.397 "seek_hole": false, 00:15:22.397 "seek_data": false, 00:15:22.397 "copy": true, 00:15:22.397 "nvme_iov_md": false 00:15:22.397 }, 00:15:22.397 "memory_domains": [ 00:15:22.397 { 00:15:22.397 "dma_device_id": "system", 00:15:22.397 "dma_device_type": 1 00:15:22.397 }, 00:15:22.397 { 00:15:22.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:22.397 "dma_device_type": 2 00:15:22.397 } 00:15:22.397 ], 00:15:22.397 "driver_specific": {} 00:15:22.397 } 00:15:22.397 ] 00:15:22.397 09:59:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:15:22.397 09:59:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:22.397 09:59:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:22.397 09:59:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:22.397 09:59:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:22.397 09:59:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:22.397 09:59:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:22.397 09:59:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:22.397 09:59:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:22.397 09:59:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:22.397 09:59:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:22.397 09:59:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:22.397 09:59:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:22.397 09:59:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:22.397 09:59:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.397 09:59:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.397 09:59:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:22.397 09:59:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.655 09:59:59 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.655 09:59:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:22.655 "name": "Existed_Raid", 00:15:22.656 "uuid": "51d59329-ad3a-478c-b34b-cdfe1fa60b3b", 00:15:22.656 "strip_size_kb": 64, 00:15:22.656 "state": "online", 00:15:22.656 "raid_level": "raid5f", 00:15:22.656 "superblock": false, 00:15:22.656 "num_base_bdevs": 3, 00:15:22.656 "num_base_bdevs_discovered": 3, 00:15:22.656 "num_base_bdevs_operational": 3, 00:15:22.656 "base_bdevs_list": [ 00:15:22.656 { 00:15:22.656 "name": "BaseBdev1", 00:15:22.656 "uuid": "9f78fcf8-c808-4d37-8347-e0245f94c584", 00:15:22.656 "is_configured": true, 00:15:22.656 "data_offset": 0, 00:15:22.656 "data_size": 65536 00:15:22.656 }, 00:15:22.656 { 00:15:22.656 "name": "BaseBdev2", 00:15:22.656 "uuid": "e63cdd90-647a-4ae2-aac7-524fd7f68f6d", 00:15:22.656 "is_configured": true, 00:15:22.656 "data_offset": 0, 00:15:22.656 "data_size": 65536 00:15:22.656 }, 00:15:22.656 { 00:15:22.656 "name": "BaseBdev3", 00:15:22.656 "uuid": "475ccea0-1718-4ab4-900d-63ca755a2722", 00:15:22.656 "is_configured": true, 00:15:22.656 "data_offset": 0, 00:15:22.656 "data_size": 65536 00:15:22.656 } 00:15:22.656 ] 00:15:22.656 }' 00:15:22.656 09:59:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:22.656 09:59:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.915 09:59:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:22.915 09:59:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:22.915 09:59:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:22.915 09:59:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:22.915 09:59:59 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:22.915 09:59:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:22.915 09:59:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:22.915 09:59:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:22.915 09:59:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.915 09:59:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.915 [2024-10-21 09:59:59.473133] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:22.916 09:59:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.175 09:59:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:23.175 "name": "Existed_Raid", 00:15:23.175 "aliases": [ 00:15:23.175 "51d59329-ad3a-478c-b34b-cdfe1fa60b3b" 00:15:23.175 ], 00:15:23.175 "product_name": "Raid Volume", 00:15:23.175 "block_size": 512, 00:15:23.175 "num_blocks": 131072, 00:15:23.175 "uuid": "51d59329-ad3a-478c-b34b-cdfe1fa60b3b", 00:15:23.175 "assigned_rate_limits": { 00:15:23.175 "rw_ios_per_sec": 0, 00:15:23.175 "rw_mbytes_per_sec": 0, 00:15:23.175 "r_mbytes_per_sec": 0, 00:15:23.175 "w_mbytes_per_sec": 0 00:15:23.175 }, 00:15:23.175 "claimed": false, 00:15:23.175 "zoned": false, 00:15:23.175 "supported_io_types": { 00:15:23.175 "read": true, 00:15:23.175 "write": true, 00:15:23.175 "unmap": false, 00:15:23.175 "flush": false, 00:15:23.175 "reset": true, 00:15:23.175 "nvme_admin": false, 00:15:23.175 "nvme_io": false, 00:15:23.175 "nvme_io_md": false, 00:15:23.175 "write_zeroes": true, 00:15:23.175 "zcopy": false, 00:15:23.175 "get_zone_info": false, 00:15:23.175 "zone_management": false, 00:15:23.175 "zone_append": false, 
00:15:23.175 "compare": false, 00:15:23.175 "compare_and_write": false, 00:15:23.175 "abort": false, 00:15:23.175 "seek_hole": false, 00:15:23.175 "seek_data": false, 00:15:23.175 "copy": false, 00:15:23.175 "nvme_iov_md": false 00:15:23.175 }, 00:15:23.175 "driver_specific": { 00:15:23.175 "raid": { 00:15:23.175 "uuid": "51d59329-ad3a-478c-b34b-cdfe1fa60b3b", 00:15:23.175 "strip_size_kb": 64, 00:15:23.175 "state": "online", 00:15:23.175 "raid_level": "raid5f", 00:15:23.175 "superblock": false, 00:15:23.175 "num_base_bdevs": 3, 00:15:23.175 "num_base_bdevs_discovered": 3, 00:15:23.175 "num_base_bdevs_operational": 3, 00:15:23.175 "base_bdevs_list": [ 00:15:23.175 { 00:15:23.175 "name": "BaseBdev1", 00:15:23.175 "uuid": "9f78fcf8-c808-4d37-8347-e0245f94c584", 00:15:23.175 "is_configured": true, 00:15:23.175 "data_offset": 0, 00:15:23.175 "data_size": 65536 00:15:23.175 }, 00:15:23.175 { 00:15:23.175 "name": "BaseBdev2", 00:15:23.175 "uuid": "e63cdd90-647a-4ae2-aac7-524fd7f68f6d", 00:15:23.175 "is_configured": true, 00:15:23.175 "data_offset": 0, 00:15:23.175 "data_size": 65536 00:15:23.175 }, 00:15:23.175 { 00:15:23.175 "name": "BaseBdev3", 00:15:23.175 "uuid": "475ccea0-1718-4ab4-900d-63ca755a2722", 00:15:23.175 "is_configured": true, 00:15:23.175 "data_offset": 0, 00:15:23.175 "data_size": 65536 00:15:23.175 } 00:15:23.175 ] 00:15:23.175 } 00:15:23.175 } 00:15:23.175 }' 00:15:23.175 09:59:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:23.175 09:59:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:23.175 BaseBdev2 00:15:23.175 BaseBdev3' 00:15:23.175 09:59:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:23.175 09:59:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:15:23.175 09:59:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:23.175 09:59:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:23.175 09:59:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:23.175 09:59:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.175 09:59:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.175 09:59:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.175 09:59:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:23.175 09:59:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:23.175 09:59:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:23.175 09:59:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:23.175 09:59:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:23.175 09:59:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.175 09:59:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.175 09:59:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.175 09:59:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:23.175 09:59:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:23.175 09:59:59 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:23.175 09:59:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:23.175 09:59:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:23.175 09:59:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.175 09:59:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.175 09:59:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.175 09:59:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:23.175 09:59:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:23.175 09:59:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:23.175 09:59:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.175 09:59:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.175 [2024-10-21 09:59:59.732487] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:23.434 09:59:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.434 09:59:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:23.434 09:59:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:15:23.434 09:59:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:23.434 09:59:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:23.434 09:59:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:23.434 
09:59:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:15:23.434 09:59:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:23.434 09:59:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:23.434 09:59:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:23.434 09:59:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:23.434 09:59:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:23.434 09:59:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:23.434 09:59:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:23.434 09:59:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:23.434 09:59:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:23.434 09:59:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.434 09:59:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.434 09:59:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.434 09:59:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:23.434 09:59:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.434 09:59:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:23.434 "name": "Existed_Raid", 00:15:23.434 "uuid": "51d59329-ad3a-478c-b34b-cdfe1fa60b3b", 00:15:23.434 "strip_size_kb": 64, 00:15:23.434 "state": 
"online", 00:15:23.434 "raid_level": "raid5f", 00:15:23.434 "superblock": false, 00:15:23.434 "num_base_bdevs": 3, 00:15:23.434 "num_base_bdevs_discovered": 2, 00:15:23.434 "num_base_bdevs_operational": 2, 00:15:23.434 "base_bdevs_list": [ 00:15:23.434 { 00:15:23.434 "name": null, 00:15:23.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.434 "is_configured": false, 00:15:23.434 "data_offset": 0, 00:15:23.434 "data_size": 65536 00:15:23.434 }, 00:15:23.434 { 00:15:23.434 "name": "BaseBdev2", 00:15:23.434 "uuid": "e63cdd90-647a-4ae2-aac7-524fd7f68f6d", 00:15:23.434 "is_configured": true, 00:15:23.434 "data_offset": 0, 00:15:23.434 "data_size": 65536 00:15:23.434 }, 00:15:23.434 { 00:15:23.434 "name": "BaseBdev3", 00:15:23.434 "uuid": "475ccea0-1718-4ab4-900d-63ca755a2722", 00:15:23.434 "is_configured": true, 00:15:23.434 "data_offset": 0, 00:15:23.434 "data_size": 65536 00:15:23.434 } 00:15:23.434 ] 00:15:23.434 }' 00:15:23.434 09:59:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:23.434 09:59:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.001 10:00:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:24.001 10:00:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:24.001 10:00:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:24.001 10:00:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.001 10:00:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.001 10:00:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.001 10:00:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.001 10:00:00 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:24.001 10:00:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:24.001 10:00:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:24.001 10:00:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.001 10:00:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.001 [2024-10-21 10:00:00.356309] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:24.001 [2024-10-21 10:00:00.356432] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:24.001 [2024-10-21 10:00:00.457837] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:24.001 10:00:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.001 10:00:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:24.001 10:00:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:24.001 10:00:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:24.001 10:00:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.001 10:00:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.001 10:00:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.001 10:00:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.001 10:00:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:24.001 10:00:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:15:24.001 10:00:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:24.001 10:00:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.001 10:00:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.001 [2024-10-21 10:00:00.513784] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:24.001 [2024-10-21 10:00:00.513843] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state offline 00:15:24.293 10:00:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.293 10:00:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:24.293 10:00:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:24.293 10:00:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.293 10:00:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:24.293 10:00:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.293 10:00:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.293 10:00:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.293 10:00:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:24.293 10:00:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:24.293 10:00:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:15:24.293 10:00:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:24.293 10:00:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:15:24.293 10:00:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:24.293 10:00:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.293 10:00:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.293 BaseBdev2 00:15:24.293 10:00:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.293 10:00:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:24.293 10:00:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:15:24.293 10:00:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:24.293 10:00:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:24.293 10:00:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:24.293 10:00:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:24.293 10:00:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:24.293 10:00:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.293 10:00:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.293 10:00:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.293 10:00:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:24.293 10:00:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.293 10:00:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:15:24.293 [ 00:15:24.293 { 00:15:24.293 "name": "BaseBdev2", 00:15:24.293 "aliases": [ 00:15:24.293 "9cb14158-8bf7-4b9d-8aab-81aa43f7fc61" 00:15:24.293 ], 00:15:24.293 "product_name": "Malloc disk", 00:15:24.293 "block_size": 512, 00:15:24.293 "num_blocks": 65536, 00:15:24.293 "uuid": "9cb14158-8bf7-4b9d-8aab-81aa43f7fc61", 00:15:24.293 "assigned_rate_limits": { 00:15:24.293 "rw_ios_per_sec": 0, 00:15:24.293 "rw_mbytes_per_sec": 0, 00:15:24.293 "r_mbytes_per_sec": 0, 00:15:24.293 "w_mbytes_per_sec": 0 00:15:24.293 }, 00:15:24.293 "claimed": false, 00:15:24.293 "zoned": false, 00:15:24.293 "supported_io_types": { 00:15:24.293 "read": true, 00:15:24.293 "write": true, 00:15:24.293 "unmap": true, 00:15:24.293 "flush": true, 00:15:24.293 "reset": true, 00:15:24.293 "nvme_admin": false, 00:15:24.293 "nvme_io": false, 00:15:24.293 "nvme_io_md": false, 00:15:24.293 "write_zeroes": true, 00:15:24.293 "zcopy": true, 00:15:24.293 "get_zone_info": false, 00:15:24.293 "zone_management": false, 00:15:24.293 "zone_append": false, 00:15:24.293 "compare": false, 00:15:24.293 "compare_and_write": false, 00:15:24.294 "abort": true, 00:15:24.294 "seek_hole": false, 00:15:24.294 "seek_data": false, 00:15:24.294 "copy": true, 00:15:24.294 "nvme_iov_md": false 00:15:24.294 }, 00:15:24.294 "memory_domains": [ 00:15:24.294 { 00:15:24.294 "dma_device_id": "system", 00:15:24.294 "dma_device_type": 1 00:15:24.294 }, 00:15:24.294 { 00:15:24.294 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:24.294 "dma_device_type": 2 00:15:24.294 } 00:15:24.294 ], 00:15:24.294 "driver_specific": {} 00:15:24.294 } 00:15:24.294 ] 00:15:24.294 10:00:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.294 10:00:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:24.294 10:00:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:24.294 10:00:00 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:24.294 10:00:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:24.294 10:00:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.294 10:00:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.294 BaseBdev3 00:15:24.294 10:00:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.294 10:00:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:24.294 10:00:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:15:24.294 10:00:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:24.294 10:00:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:24.294 10:00:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:24.294 10:00:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:24.294 10:00:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:24.294 10:00:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.294 10:00:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.294 10:00:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.294 10:00:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:24.294 10:00:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.294 10:00:00 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:24.294 [ 00:15:24.294 { 00:15:24.294 "name": "BaseBdev3", 00:15:24.294 "aliases": [ 00:15:24.294 "63b2b723-2cfd-4fcc-b51d-494058cd3582" 00:15:24.294 ], 00:15:24.294 "product_name": "Malloc disk", 00:15:24.294 "block_size": 512, 00:15:24.294 "num_blocks": 65536, 00:15:24.294 "uuid": "63b2b723-2cfd-4fcc-b51d-494058cd3582", 00:15:24.294 "assigned_rate_limits": { 00:15:24.294 "rw_ios_per_sec": 0, 00:15:24.294 "rw_mbytes_per_sec": 0, 00:15:24.294 "r_mbytes_per_sec": 0, 00:15:24.294 "w_mbytes_per_sec": 0 00:15:24.294 }, 00:15:24.294 "claimed": false, 00:15:24.294 "zoned": false, 00:15:24.294 "supported_io_types": { 00:15:24.294 "read": true, 00:15:24.294 "write": true, 00:15:24.294 "unmap": true, 00:15:24.294 "flush": true, 00:15:24.294 "reset": true, 00:15:24.294 "nvme_admin": false, 00:15:24.294 "nvme_io": false, 00:15:24.294 "nvme_io_md": false, 00:15:24.294 "write_zeroes": true, 00:15:24.294 "zcopy": true, 00:15:24.294 "get_zone_info": false, 00:15:24.294 "zone_management": false, 00:15:24.294 "zone_append": false, 00:15:24.294 "compare": false, 00:15:24.294 "compare_and_write": false, 00:15:24.294 "abort": true, 00:15:24.294 "seek_hole": false, 00:15:24.294 "seek_data": false, 00:15:24.294 "copy": true, 00:15:24.294 "nvme_iov_md": false 00:15:24.294 }, 00:15:24.294 "memory_domains": [ 00:15:24.294 { 00:15:24.294 "dma_device_id": "system", 00:15:24.294 "dma_device_type": 1 00:15:24.294 }, 00:15:24.294 { 00:15:24.294 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:24.294 "dma_device_type": 2 00:15:24.294 } 00:15:24.294 ], 00:15:24.294 "driver_specific": {} 00:15:24.294 } 00:15:24.294 ] 00:15:24.294 10:00:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.294 10:00:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:24.294 10:00:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:24.294 10:00:00 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:24.294 10:00:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:24.294 10:00:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.294 10:00:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.294 [2024-10-21 10:00:00.840309] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:24.294 [2024-10-21 10:00:00.840355] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:24.294 [2024-10-21 10:00:00.840377] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:24.294 [2024-10-21 10:00:00.842414] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:24.294 10:00:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.294 10:00:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:24.294 10:00:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:24.294 10:00:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:24.294 10:00:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:24.294 10:00:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:24.294 10:00:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:24.294 10:00:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:24.294 10:00:00 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:24.294 10:00:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:24.294 10:00:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:24.294 10:00:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.294 10:00:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:24.294 10:00:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.294 10:00:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.559 10:00:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.559 10:00:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:24.559 "name": "Existed_Raid", 00:15:24.559 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.559 "strip_size_kb": 64, 00:15:24.559 "state": "configuring", 00:15:24.559 "raid_level": "raid5f", 00:15:24.559 "superblock": false, 00:15:24.559 "num_base_bdevs": 3, 00:15:24.559 "num_base_bdevs_discovered": 2, 00:15:24.559 "num_base_bdevs_operational": 3, 00:15:24.559 "base_bdevs_list": [ 00:15:24.559 { 00:15:24.559 "name": "BaseBdev1", 00:15:24.559 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.559 "is_configured": false, 00:15:24.559 "data_offset": 0, 00:15:24.559 "data_size": 0 00:15:24.559 }, 00:15:24.559 { 00:15:24.559 "name": "BaseBdev2", 00:15:24.559 "uuid": "9cb14158-8bf7-4b9d-8aab-81aa43f7fc61", 00:15:24.559 "is_configured": true, 00:15:24.559 "data_offset": 0, 00:15:24.559 "data_size": 65536 00:15:24.559 }, 00:15:24.559 { 00:15:24.559 "name": "BaseBdev3", 00:15:24.559 "uuid": "63b2b723-2cfd-4fcc-b51d-494058cd3582", 00:15:24.559 "is_configured": true, 
00:15:24.559 "data_offset": 0, 00:15:24.559 "data_size": 65536 00:15:24.559 } 00:15:24.559 ] 00:15:24.559 }' 00:15:24.559 10:00:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:24.559 10:00:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.818 10:00:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:24.818 10:00:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.818 10:00:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.818 [2024-10-21 10:00:01.307543] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:24.818 10:00:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.818 10:00:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:24.818 10:00:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:24.818 10:00:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:24.818 10:00:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:24.818 10:00:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:24.818 10:00:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:24.818 10:00:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:24.818 10:00:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:24.818 10:00:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:24.818 10:00:01 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:24.818 10:00:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.818 10:00:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:24.818 10:00:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.818 10:00:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.818 10:00:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.818 10:00:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:24.818 "name": "Existed_Raid", 00:15:24.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.818 "strip_size_kb": 64, 00:15:24.818 "state": "configuring", 00:15:24.818 "raid_level": "raid5f", 00:15:24.818 "superblock": false, 00:15:24.818 "num_base_bdevs": 3, 00:15:24.818 "num_base_bdevs_discovered": 1, 00:15:24.818 "num_base_bdevs_operational": 3, 00:15:24.818 "base_bdevs_list": [ 00:15:24.818 { 00:15:24.818 "name": "BaseBdev1", 00:15:24.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.818 "is_configured": false, 00:15:24.818 "data_offset": 0, 00:15:24.818 "data_size": 0 00:15:24.818 }, 00:15:24.818 { 00:15:24.818 "name": null, 00:15:24.818 "uuid": "9cb14158-8bf7-4b9d-8aab-81aa43f7fc61", 00:15:24.818 "is_configured": false, 00:15:24.818 "data_offset": 0, 00:15:24.818 "data_size": 65536 00:15:24.818 }, 00:15:24.818 { 00:15:24.818 "name": "BaseBdev3", 00:15:24.818 "uuid": "63b2b723-2cfd-4fcc-b51d-494058cd3582", 00:15:24.818 "is_configured": true, 00:15:24.818 "data_offset": 0, 00:15:24.818 "data_size": 65536 00:15:24.818 } 00:15:24.818 ] 00:15:24.818 }' 00:15:24.818 10:00:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:24.818 10:00:01 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.386 10:00:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:25.386 10:00:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.386 10:00:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.386 10:00:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.386 10:00:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.386 10:00:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:25.386 10:00:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:25.386 10:00:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.386 10:00:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.386 [2024-10-21 10:00:01.846757] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:25.386 BaseBdev1 00:15:25.386 10:00:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.386 10:00:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:25.386 10:00:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:15:25.386 10:00:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:25.386 10:00:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:25.386 10:00:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:25.386 10:00:01 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:25.386 10:00:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:25.386 10:00:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.386 10:00:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.386 10:00:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.386 10:00:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:25.386 10:00:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.386 10:00:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.386 [ 00:15:25.386 { 00:15:25.386 "name": "BaseBdev1", 00:15:25.386 "aliases": [ 00:15:25.386 "be4e4a30-cd3f-43b6-baf3-f52e129cc012" 00:15:25.386 ], 00:15:25.386 "product_name": "Malloc disk", 00:15:25.386 "block_size": 512, 00:15:25.386 "num_blocks": 65536, 00:15:25.386 "uuid": "be4e4a30-cd3f-43b6-baf3-f52e129cc012", 00:15:25.386 "assigned_rate_limits": { 00:15:25.386 "rw_ios_per_sec": 0, 00:15:25.386 "rw_mbytes_per_sec": 0, 00:15:25.386 "r_mbytes_per_sec": 0, 00:15:25.386 "w_mbytes_per_sec": 0 00:15:25.386 }, 00:15:25.386 "claimed": true, 00:15:25.386 "claim_type": "exclusive_write", 00:15:25.386 "zoned": false, 00:15:25.386 "supported_io_types": { 00:15:25.386 "read": true, 00:15:25.386 "write": true, 00:15:25.386 "unmap": true, 00:15:25.386 "flush": true, 00:15:25.386 "reset": true, 00:15:25.386 "nvme_admin": false, 00:15:25.386 "nvme_io": false, 00:15:25.386 "nvme_io_md": false, 00:15:25.386 "write_zeroes": true, 00:15:25.386 "zcopy": true, 00:15:25.386 "get_zone_info": false, 00:15:25.386 "zone_management": false, 00:15:25.386 "zone_append": false, 00:15:25.386 
"compare": false, 00:15:25.386 "compare_and_write": false, 00:15:25.386 "abort": true, 00:15:25.386 "seek_hole": false, 00:15:25.386 "seek_data": false, 00:15:25.386 "copy": true, 00:15:25.386 "nvme_iov_md": false 00:15:25.386 }, 00:15:25.386 "memory_domains": [ 00:15:25.386 { 00:15:25.386 "dma_device_id": "system", 00:15:25.386 "dma_device_type": 1 00:15:25.386 }, 00:15:25.386 { 00:15:25.386 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:25.386 "dma_device_type": 2 00:15:25.386 } 00:15:25.386 ], 00:15:25.386 "driver_specific": {} 00:15:25.386 } 00:15:25.386 ] 00:15:25.386 10:00:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.386 10:00:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:25.386 10:00:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:25.386 10:00:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:25.386 10:00:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:25.386 10:00:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:25.386 10:00:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:25.386 10:00:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:25.386 10:00:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:25.386 10:00:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:25.386 10:00:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:25.386 10:00:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:25.386 10:00:01 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.386 10:00:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:25.386 10:00:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.386 10:00:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.386 10:00:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.386 10:00:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:25.386 "name": "Existed_Raid", 00:15:25.386 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.386 "strip_size_kb": 64, 00:15:25.386 "state": "configuring", 00:15:25.386 "raid_level": "raid5f", 00:15:25.386 "superblock": false, 00:15:25.386 "num_base_bdevs": 3, 00:15:25.386 "num_base_bdevs_discovered": 2, 00:15:25.386 "num_base_bdevs_operational": 3, 00:15:25.386 "base_bdevs_list": [ 00:15:25.386 { 00:15:25.386 "name": "BaseBdev1", 00:15:25.386 "uuid": "be4e4a30-cd3f-43b6-baf3-f52e129cc012", 00:15:25.386 "is_configured": true, 00:15:25.386 "data_offset": 0, 00:15:25.386 "data_size": 65536 00:15:25.386 }, 00:15:25.386 { 00:15:25.386 "name": null, 00:15:25.386 "uuid": "9cb14158-8bf7-4b9d-8aab-81aa43f7fc61", 00:15:25.386 "is_configured": false, 00:15:25.386 "data_offset": 0, 00:15:25.386 "data_size": 65536 00:15:25.386 }, 00:15:25.386 { 00:15:25.386 "name": "BaseBdev3", 00:15:25.386 "uuid": "63b2b723-2cfd-4fcc-b51d-494058cd3582", 00:15:25.386 "is_configured": true, 00:15:25.386 "data_offset": 0, 00:15:25.386 "data_size": 65536 00:15:25.386 } 00:15:25.386 ] 00:15:25.386 }' 00:15:25.387 10:00:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:25.387 10:00:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.954 10:00:02 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.954 10:00:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:25.954 10:00:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.954 10:00:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.954 10:00:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.954 10:00:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:25.954 10:00:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:25.954 10:00:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.954 10:00:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.954 [2024-10-21 10:00:02.358003] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:25.954 10:00:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.954 10:00:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:25.954 10:00:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:25.954 10:00:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:25.954 10:00:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:25.954 10:00:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:25.954 10:00:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:25.954 10:00:02 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:25.954 10:00:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:25.954 10:00:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:25.954 10:00:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:25.954 10:00:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.954 10:00:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:25.954 10:00:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.954 10:00:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.954 10:00:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.954 10:00:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:25.954 "name": "Existed_Raid", 00:15:25.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.954 "strip_size_kb": 64, 00:15:25.954 "state": "configuring", 00:15:25.954 "raid_level": "raid5f", 00:15:25.954 "superblock": false, 00:15:25.954 "num_base_bdevs": 3, 00:15:25.954 "num_base_bdevs_discovered": 1, 00:15:25.954 "num_base_bdevs_operational": 3, 00:15:25.954 "base_bdevs_list": [ 00:15:25.954 { 00:15:25.954 "name": "BaseBdev1", 00:15:25.954 "uuid": "be4e4a30-cd3f-43b6-baf3-f52e129cc012", 00:15:25.954 "is_configured": true, 00:15:25.954 "data_offset": 0, 00:15:25.955 "data_size": 65536 00:15:25.955 }, 00:15:25.955 { 00:15:25.955 "name": null, 00:15:25.955 "uuid": "9cb14158-8bf7-4b9d-8aab-81aa43f7fc61", 00:15:25.955 "is_configured": false, 00:15:25.955 "data_offset": 0, 00:15:25.955 "data_size": 65536 00:15:25.955 }, 00:15:25.955 { 00:15:25.955 "name": null, 
00:15:25.955 "uuid": "63b2b723-2cfd-4fcc-b51d-494058cd3582", 00:15:25.955 "is_configured": false, 00:15:25.955 "data_offset": 0, 00:15:25.955 "data_size": 65536 00:15:25.955 } 00:15:25.955 ] 00:15:25.955 }' 00:15:25.955 10:00:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:25.955 10:00:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.523 10:00:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.523 10:00:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:26.523 10:00:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.523 10:00:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.523 10:00:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.523 10:00:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:26.523 10:00:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:26.523 10:00:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.523 10:00:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.523 [2024-10-21 10:00:02.865138] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:26.523 10:00:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.523 10:00:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:26.523 10:00:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:26.523 10:00:02 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:26.523 10:00:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:26.523 10:00:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:26.523 10:00:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:26.523 10:00:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:26.523 10:00:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:26.523 10:00:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:26.523 10:00:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:26.523 10:00:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:26.523 10:00:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.523 10:00:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.523 10:00:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.523 10:00:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.523 10:00:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:26.523 "name": "Existed_Raid", 00:15:26.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.523 "strip_size_kb": 64, 00:15:26.523 "state": "configuring", 00:15:26.523 "raid_level": "raid5f", 00:15:26.523 "superblock": false, 00:15:26.523 "num_base_bdevs": 3, 00:15:26.523 "num_base_bdevs_discovered": 2, 00:15:26.523 "num_base_bdevs_operational": 3, 00:15:26.523 "base_bdevs_list": [ 00:15:26.523 { 
00:15:26.523 "name": "BaseBdev1", 00:15:26.523 "uuid": "be4e4a30-cd3f-43b6-baf3-f52e129cc012", 00:15:26.523 "is_configured": true, 00:15:26.523 "data_offset": 0, 00:15:26.523 "data_size": 65536 00:15:26.523 }, 00:15:26.523 { 00:15:26.523 "name": null, 00:15:26.523 "uuid": "9cb14158-8bf7-4b9d-8aab-81aa43f7fc61", 00:15:26.523 "is_configured": false, 00:15:26.523 "data_offset": 0, 00:15:26.523 "data_size": 65536 00:15:26.523 }, 00:15:26.523 { 00:15:26.523 "name": "BaseBdev3", 00:15:26.523 "uuid": "63b2b723-2cfd-4fcc-b51d-494058cd3582", 00:15:26.523 "is_configured": true, 00:15:26.523 "data_offset": 0, 00:15:26.523 "data_size": 65536 00:15:26.523 } 00:15:26.523 ] 00:15:26.523 }' 00:15:26.523 10:00:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:26.523 10:00:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.782 10:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:26.782 10:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.782 10:00:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.782 10:00:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.782 10:00:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.782 10:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:26.782 10:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:26.782 10:00:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.782 10:00:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.782 [2024-10-21 10:00:03.332387] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:27.041 10:00:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.041 10:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:27.041 10:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:27.041 10:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:27.041 10:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:27.041 10:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:27.041 10:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:27.041 10:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:27.041 10:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:27.041 10:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:27.041 10:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:27.041 10:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.041 10:00:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.041 10:00:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.041 10:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:27.041 10:00:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.041 10:00:03 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:27.041 "name": "Existed_Raid", 00:15:27.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.041 "strip_size_kb": 64, 00:15:27.041 "state": "configuring", 00:15:27.041 "raid_level": "raid5f", 00:15:27.041 "superblock": false, 00:15:27.041 "num_base_bdevs": 3, 00:15:27.041 "num_base_bdevs_discovered": 1, 00:15:27.041 "num_base_bdevs_operational": 3, 00:15:27.041 "base_bdevs_list": [ 00:15:27.041 { 00:15:27.041 "name": null, 00:15:27.041 "uuid": "be4e4a30-cd3f-43b6-baf3-f52e129cc012", 00:15:27.041 "is_configured": false, 00:15:27.041 "data_offset": 0, 00:15:27.041 "data_size": 65536 00:15:27.041 }, 00:15:27.041 { 00:15:27.041 "name": null, 00:15:27.041 "uuid": "9cb14158-8bf7-4b9d-8aab-81aa43f7fc61", 00:15:27.041 "is_configured": false, 00:15:27.041 "data_offset": 0, 00:15:27.041 "data_size": 65536 00:15:27.041 }, 00:15:27.041 { 00:15:27.041 "name": "BaseBdev3", 00:15:27.041 "uuid": "63b2b723-2cfd-4fcc-b51d-494058cd3582", 00:15:27.041 "is_configured": true, 00:15:27.041 "data_offset": 0, 00:15:27.041 "data_size": 65536 00:15:27.041 } 00:15:27.041 ] 00:15:27.041 }' 00:15:27.041 10:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:27.041 10:00:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.300 10:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.300 10:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:27.300 10:00:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.300 10:00:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.300 10:00:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.558 10:00:03 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:27.558 10:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:27.558 10:00:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.558 10:00:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.558 [2024-10-21 10:00:03.921370] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:27.558 10:00:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.558 10:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:27.558 10:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:27.558 10:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:27.558 10:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:27.558 10:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:27.558 10:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:27.558 10:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:27.558 10:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:27.558 10:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:27.558 10:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:27.558 10:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.558 10:00:03 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:27.558 10:00:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.558 10:00:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.558 10:00:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.558 10:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:27.558 "name": "Existed_Raid", 00:15:27.558 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.558 "strip_size_kb": 64, 00:15:27.558 "state": "configuring", 00:15:27.558 "raid_level": "raid5f", 00:15:27.558 "superblock": false, 00:15:27.558 "num_base_bdevs": 3, 00:15:27.558 "num_base_bdevs_discovered": 2, 00:15:27.558 "num_base_bdevs_operational": 3, 00:15:27.558 "base_bdevs_list": [ 00:15:27.558 { 00:15:27.558 "name": null, 00:15:27.558 "uuid": "be4e4a30-cd3f-43b6-baf3-f52e129cc012", 00:15:27.558 "is_configured": false, 00:15:27.558 "data_offset": 0, 00:15:27.558 "data_size": 65536 00:15:27.558 }, 00:15:27.558 { 00:15:27.558 "name": "BaseBdev2", 00:15:27.558 "uuid": "9cb14158-8bf7-4b9d-8aab-81aa43f7fc61", 00:15:27.558 "is_configured": true, 00:15:27.558 "data_offset": 0, 00:15:27.558 "data_size": 65536 00:15:27.558 }, 00:15:27.558 { 00:15:27.558 "name": "BaseBdev3", 00:15:27.558 "uuid": "63b2b723-2cfd-4fcc-b51d-494058cd3582", 00:15:27.558 "is_configured": true, 00:15:27.558 "data_offset": 0, 00:15:27.558 "data_size": 65536 00:15:27.558 } 00:15:27.558 ] 00:15:27.558 }' 00:15:27.558 10:00:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:27.558 10:00:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.817 10:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.817 10:00:04 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.817 10:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.817 10:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:27.817 10:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.077 10:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:28.077 10:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.077 10:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:28.077 10:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.077 10:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.077 10:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.077 10:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u be4e4a30-cd3f-43b6-baf3-f52e129cc012 00:15:28.077 10:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.077 10:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.077 [2024-10-21 10:00:04.520802] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:28.077 [2024-10-21 10:00:04.520877] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:15:28.077 [2024-10-21 10:00:04.520887] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:28.077 [2024-10-21 10:00:04.521173] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005fb0 00:15:28.077 [2024-10-21 10:00:04.526667] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:15:28.077 [2024-10-21 10:00:04.526694] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006600 00:15:28.077 [2024-10-21 10:00:04.526995] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:28.077 NewBaseBdev 00:15:28.077 10:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.077 10:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:28.077 10:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:15:28.077 10:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:28.077 10:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:28.077 10:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:28.077 10:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:28.077 10:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:28.078 10:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.078 10:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.078 10:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.078 10:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:28.078 10:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.078 10:00:04 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.078 [ 00:15:28.078 { 00:15:28.078 "name": "NewBaseBdev", 00:15:28.078 "aliases": [ 00:15:28.078 "be4e4a30-cd3f-43b6-baf3-f52e129cc012" 00:15:28.078 ], 00:15:28.078 "product_name": "Malloc disk", 00:15:28.078 "block_size": 512, 00:15:28.078 "num_blocks": 65536, 00:15:28.078 "uuid": "be4e4a30-cd3f-43b6-baf3-f52e129cc012", 00:15:28.078 "assigned_rate_limits": { 00:15:28.078 "rw_ios_per_sec": 0, 00:15:28.078 "rw_mbytes_per_sec": 0, 00:15:28.078 "r_mbytes_per_sec": 0, 00:15:28.078 "w_mbytes_per_sec": 0 00:15:28.078 }, 00:15:28.078 "claimed": true, 00:15:28.078 "claim_type": "exclusive_write", 00:15:28.078 "zoned": false, 00:15:28.078 "supported_io_types": { 00:15:28.078 "read": true, 00:15:28.078 "write": true, 00:15:28.078 "unmap": true, 00:15:28.078 "flush": true, 00:15:28.078 "reset": true, 00:15:28.078 "nvme_admin": false, 00:15:28.078 "nvme_io": false, 00:15:28.078 "nvme_io_md": false, 00:15:28.078 "write_zeroes": true, 00:15:28.078 "zcopy": true, 00:15:28.078 "get_zone_info": false, 00:15:28.078 "zone_management": false, 00:15:28.078 "zone_append": false, 00:15:28.078 "compare": false, 00:15:28.078 "compare_and_write": false, 00:15:28.078 "abort": true, 00:15:28.078 "seek_hole": false, 00:15:28.078 "seek_data": false, 00:15:28.078 "copy": true, 00:15:28.078 "nvme_iov_md": false 00:15:28.078 }, 00:15:28.078 "memory_domains": [ 00:15:28.078 { 00:15:28.078 "dma_device_id": "system", 00:15:28.078 "dma_device_type": 1 00:15:28.078 }, 00:15:28.078 { 00:15:28.078 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:28.078 "dma_device_type": 2 00:15:28.078 } 00:15:28.078 ], 00:15:28.078 "driver_specific": {} 00:15:28.078 } 00:15:28.078 ] 00:15:28.078 10:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.078 10:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:28.078 10:00:04 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:28.078 10:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:28.078 10:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:28.078 10:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:28.078 10:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:28.078 10:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:28.078 10:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:28.078 10:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:28.078 10:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:28.078 10:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:28.078 10:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.078 10:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:28.078 10:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.078 10:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.078 10:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.078 10:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:28.078 "name": "Existed_Raid", 00:15:28.078 "uuid": "853c754b-3638-4de5-a838-ac946ec151e6", 00:15:28.078 "strip_size_kb": 64, 00:15:28.078 "state": "online", 
00:15:28.078 "raid_level": "raid5f", 00:15:28.078 "superblock": false, 00:15:28.078 "num_base_bdevs": 3, 00:15:28.078 "num_base_bdevs_discovered": 3, 00:15:28.078 "num_base_bdevs_operational": 3, 00:15:28.078 "base_bdevs_list": [ 00:15:28.078 { 00:15:28.078 "name": "NewBaseBdev", 00:15:28.078 "uuid": "be4e4a30-cd3f-43b6-baf3-f52e129cc012", 00:15:28.078 "is_configured": true, 00:15:28.078 "data_offset": 0, 00:15:28.078 "data_size": 65536 00:15:28.078 }, 00:15:28.078 { 00:15:28.078 "name": "BaseBdev2", 00:15:28.078 "uuid": "9cb14158-8bf7-4b9d-8aab-81aa43f7fc61", 00:15:28.078 "is_configured": true, 00:15:28.078 "data_offset": 0, 00:15:28.078 "data_size": 65536 00:15:28.078 }, 00:15:28.078 { 00:15:28.078 "name": "BaseBdev3", 00:15:28.078 "uuid": "63b2b723-2cfd-4fcc-b51d-494058cd3582", 00:15:28.078 "is_configured": true, 00:15:28.078 "data_offset": 0, 00:15:28.078 "data_size": 65536 00:15:28.078 } 00:15:28.078 ] 00:15:28.078 }' 00:15:28.078 10:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:28.078 10:00:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.647 10:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:28.647 10:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:28.647 10:00:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:28.647 10:00:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:28.647 10:00:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:28.647 10:00:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:28.647 10:00:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:28.647 10:00:05 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:28.647 10:00:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.647 10:00:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.647 [2024-10-21 10:00:05.013260] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:28.647 10:00:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.647 10:00:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:28.647 "name": "Existed_Raid", 00:15:28.647 "aliases": [ 00:15:28.647 "853c754b-3638-4de5-a838-ac946ec151e6" 00:15:28.647 ], 00:15:28.647 "product_name": "Raid Volume", 00:15:28.647 "block_size": 512, 00:15:28.647 "num_blocks": 131072, 00:15:28.647 "uuid": "853c754b-3638-4de5-a838-ac946ec151e6", 00:15:28.647 "assigned_rate_limits": { 00:15:28.647 "rw_ios_per_sec": 0, 00:15:28.647 "rw_mbytes_per_sec": 0, 00:15:28.647 "r_mbytes_per_sec": 0, 00:15:28.647 "w_mbytes_per_sec": 0 00:15:28.647 }, 00:15:28.647 "claimed": false, 00:15:28.647 "zoned": false, 00:15:28.647 "supported_io_types": { 00:15:28.647 "read": true, 00:15:28.647 "write": true, 00:15:28.647 "unmap": false, 00:15:28.647 "flush": false, 00:15:28.647 "reset": true, 00:15:28.647 "nvme_admin": false, 00:15:28.647 "nvme_io": false, 00:15:28.647 "nvme_io_md": false, 00:15:28.647 "write_zeroes": true, 00:15:28.647 "zcopy": false, 00:15:28.647 "get_zone_info": false, 00:15:28.647 "zone_management": false, 00:15:28.647 "zone_append": false, 00:15:28.647 "compare": false, 00:15:28.647 "compare_and_write": false, 00:15:28.647 "abort": false, 00:15:28.647 "seek_hole": false, 00:15:28.647 "seek_data": false, 00:15:28.647 "copy": false, 00:15:28.647 "nvme_iov_md": false 00:15:28.647 }, 00:15:28.647 "driver_specific": { 00:15:28.647 "raid": { 00:15:28.647 "uuid": "853c754b-3638-4de5-a838-ac946ec151e6", 
00:15:28.647 "strip_size_kb": 64, 00:15:28.648 "state": "online", 00:15:28.648 "raid_level": "raid5f", 00:15:28.648 "superblock": false, 00:15:28.648 "num_base_bdevs": 3, 00:15:28.648 "num_base_bdevs_discovered": 3, 00:15:28.648 "num_base_bdevs_operational": 3, 00:15:28.648 "base_bdevs_list": [ 00:15:28.648 { 00:15:28.648 "name": "NewBaseBdev", 00:15:28.648 "uuid": "be4e4a30-cd3f-43b6-baf3-f52e129cc012", 00:15:28.648 "is_configured": true, 00:15:28.648 "data_offset": 0, 00:15:28.648 "data_size": 65536 00:15:28.648 }, 00:15:28.648 { 00:15:28.648 "name": "BaseBdev2", 00:15:28.648 "uuid": "9cb14158-8bf7-4b9d-8aab-81aa43f7fc61", 00:15:28.648 "is_configured": true, 00:15:28.648 "data_offset": 0, 00:15:28.648 "data_size": 65536 00:15:28.648 }, 00:15:28.648 { 00:15:28.648 "name": "BaseBdev3", 00:15:28.648 "uuid": "63b2b723-2cfd-4fcc-b51d-494058cd3582", 00:15:28.648 "is_configured": true, 00:15:28.648 "data_offset": 0, 00:15:28.648 "data_size": 65536 00:15:28.648 } 00:15:28.648 ] 00:15:28.648 } 00:15:28.648 } 00:15:28.648 }' 00:15:28.648 10:00:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:28.648 10:00:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:28.648 BaseBdev2 00:15:28.648 BaseBdev3' 00:15:28.648 10:00:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:28.648 10:00:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:28.648 10:00:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:28.648 10:00:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:28.648 10:00:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:15:28.648 10:00:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.648 10:00:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:28.648 10:00:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.648 10:00:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:28.648 10:00:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:28.648 10:00:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:28.648 10:00:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:28.648 10:00:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.648 10:00:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.648 10:00:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:28.648 10:00:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.907 10:00:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:28.907 10:00:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:28.907 10:00:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:28.907 10:00:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:28.907 10:00:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.907 10:00:05 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:28.907 10:00:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:28.907 10:00:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.907 10:00:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:28.907 10:00:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:28.907 10:00:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:28.907 10:00:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.907 10:00:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.907 [2024-10-21 10:00:05.308594] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:28.907 [2024-10-21 10:00:05.308637] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:28.907 [2024-10-21 10:00:05.308762] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:28.907 [2024-10-21 10:00:05.309083] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:28.907 [2024-10-21 10:00:05.309106] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state offline 00:15:28.907 10:00:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.907 10:00:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 79526 00:15:28.907 10:00:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 79526 ']' 00:15:28.907 10:00:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # kill -0 
79526 00:15:28.907 10:00:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # uname 00:15:28.907 10:00:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:28.907 10:00:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79526 00:15:28.907 killing process with pid 79526 00:15:28.907 10:00:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:28.907 10:00:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:28.907 10:00:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79526' 00:15:28.907 10:00:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@969 -- # kill 79526 00:15:28.907 10:00:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@974 -- # wait 79526 00:15:28.907 [2024-10-21 10:00:05.343716] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:29.166 [2024-10-21 10:00:05.666815] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:30.547 10:00:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:15:30.547 00:15:30.547 real 0m10.967s 00:15:30.547 user 0m17.180s 00:15:30.547 sys 0m2.169s 00:15:30.547 10:00:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:30.547 ************************************ 00:15:30.547 END TEST raid5f_state_function_test 00:15:30.547 ************************************ 00:15:30.547 10:00:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.547 10:00:06 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:15:30.547 10:00:06 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 
00:15:30.547 10:00:06 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:30.547 10:00:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:30.547 ************************************ 00:15:30.547 START TEST raid5f_state_function_test_sb 00:15:30.547 ************************************ 00:15:30.547 10:00:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 3 true 00:15:30.547 10:00:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:15:30.547 10:00:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:15:30.547 10:00:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:30.547 10:00:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:30.547 10:00:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:30.547 10:00:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:30.547 10:00:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:30.547 10:00:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:30.547 10:00:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:30.547 10:00:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:30.547 10:00:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:30.548 10:00:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:30.548 10:00:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:30.548 10:00:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:15:30.548 10:00:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:30.548 10:00:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:30.548 10:00:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:30.548 10:00:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:30.548 10:00:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:30.548 10:00:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:30.548 10:00:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:30.548 10:00:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:15:30.548 10:00:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:15:30.548 10:00:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:30.548 10:00:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:30.548 10:00:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:30.548 10:00:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=80152 00:15:30.548 10:00:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:30.548 Process raid pid: 80152 00:15:30.548 10:00:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80152' 00:15:30.548 10:00:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 80152 00:15:30.548 10:00:06 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 80152 ']' 00:15:30.548 10:00:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:30.548 10:00:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:30.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:30.548 10:00:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:30.548 10:00:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:30.548 10:00:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.548 [2024-10-21 10:00:07.066988] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:15:30.548 [2024-10-21 10:00:07.067101] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:30.807 [2024-10-21 10:00:07.231360] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:30.807 [2024-10-21 10:00:07.379315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:31.066 [2024-10-21 10:00:07.637794] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:31.066 [2024-10-21 10:00:07.637844] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:31.326 10:00:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:31.326 10:00:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:15:31.326 10:00:07 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:31.326 10:00:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.326 10:00:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.326 [2024-10-21 10:00:07.894706] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:31.326 [2024-10-21 10:00:07.894760] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:31.326 [2024-10-21 10:00:07.894770] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:31.326 [2024-10-21 10:00:07.894779] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:31.326 [2024-10-21 10:00:07.894786] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:31.326 [2024-10-21 10:00:07.894795] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:31.326 10:00:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.326 10:00:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:31.326 10:00:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:31.326 10:00:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:31.326 10:00:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:31.326 10:00:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:31.326 10:00:07 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:31.326 10:00:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:31.326 10:00:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:31.326 10:00:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:31.326 10:00:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:31.326 10:00:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.326 10:00:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.326 10:00:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.326 10:00:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:31.326 10:00:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.585 10:00:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:31.585 "name": "Existed_Raid", 00:15:31.585 "uuid": "e867244e-c0ce-41e7-a715-4aa15facb4a5", 00:15:31.585 "strip_size_kb": 64, 00:15:31.585 "state": "configuring", 00:15:31.585 "raid_level": "raid5f", 00:15:31.585 "superblock": true, 00:15:31.585 "num_base_bdevs": 3, 00:15:31.585 "num_base_bdevs_discovered": 0, 00:15:31.585 "num_base_bdevs_operational": 3, 00:15:31.585 "base_bdevs_list": [ 00:15:31.585 { 00:15:31.585 "name": "BaseBdev1", 00:15:31.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:31.585 "is_configured": false, 00:15:31.585 "data_offset": 0, 00:15:31.585 "data_size": 0 00:15:31.585 }, 00:15:31.585 { 00:15:31.585 "name": "BaseBdev2", 00:15:31.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:31.585 "is_configured": false, 00:15:31.585 
"data_offset": 0, 00:15:31.585 "data_size": 0 00:15:31.585 }, 00:15:31.585 { 00:15:31.585 "name": "BaseBdev3", 00:15:31.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:31.585 "is_configured": false, 00:15:31.585 "data_offset": 0, 00:15:31.585 "data_size": 0 00:15:31.585 } 00:15:31.585 ] 00:15:31.585 }' 00:15:31.585 10:00:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:31.585 10:00:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.845 10:00:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:31.845 10:00:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.845 10:00:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.845 [2024-10-21 10:00:08.373815] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:31.845 [2024-10-21 10:00:08.373865] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000005b80 name Existed_Raid, state configuring 00:15:31.845 10:00:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.845 10:00:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:31.845 10:00:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.845 10:00:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.845 [2024-10-21 10:00:08.381827] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:31.845 [2024-10-21 10:00:08.381875] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:31.845 [2024-10-21 10:00:08.381884] 
bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:31.845 [2024-10-21 10:00:08.381894] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:31.845 [2024-10-21 10:00:08.381900] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:31.845 [2024-10-21 10:00:08.381909] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:31.845 10:00:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.845 10:00:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:31.845 10:00:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.845 10:00:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.845 [2024-10-21 10:00:08.435560] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:31.845 BaseBdev1 00:15:31.845 10:00:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.845 10:00:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:31.845 10:00:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:15:31.845 10:00:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:31.845 10:00:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:31.845 10:00:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:31.845 10:00:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:31.845 10:00:08 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:31.845 10:00:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.845 10:00:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.105 10:00:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.105 10:00:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:32.105 10:00:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.105 10:00:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.105 [ 00:15:32.105 { 00:15:32.105 "name": "BaseBdev1", 00:15:32.105 "aliases": [ 00:15:32.105 "fa1b0588-8287-4d13-bd11-5832093881b8" 00:15:32.105 ], 00:15:32.105 "product_name": "Malloc disk", 00:15:32.105 "block_size": 512, 00:15:32.105 "num_blocks": 65536, 00:15:32.105 "uuid": "fa1b0588-8287-4d13-bd11-5832093881b8", 00:15:32.105 "assigned_rate_limits": { 00:15:32.105 "rw_ios_per_sec": 0, 00:15:32.105 "rw_mbytes_per_sec": 0, 00:15:32.105 "r_mbytes_per_sec": 0, 00:15:32.105 "w_mbytes_per_sec": 0 00:15:32.105 }, 00:15:32.105 "claimed": true, 00:15:32.105 "claim_type": "exclusive_write", 00:15:32.105 "zoned": false, 00:15:32.105 "supported_io_types": { 00:15:32.105 "read": true, 00:15:32.105 "write": true, 00:15:32.105 "unmap": true, 00:15:32.105 "flush": true, 00:15:32.105 "reset": true, 00:15:32.105 "nvme_admin": false, 00:15:32.105 "nvme_io": false, 00:15:32.105 "nvme_io_md": false, 00:15:32.105 "write_zeroes": true, 00:15:32.105 "zcopy": true, 00:15:32.105 "get_zone_info": false, 00:15:32.105 "zone_management": false, 00:15:32.105 "zone_append": false, 00:15:32.105 "compare": false, 00:15:32.105 "compare_and_write": false, 00:15:32.105 "abort": true, 00:15:32.105 "seek_hole": false, 00:15:32.105 
"seek_data": false, 00:15:32.105 "copy": true, 00:15:32.105 "nvme_iov_md": false 00:15:32.105 }, 00:15:32.105 "memory_domains": [ 00:15:32.105 { 00:15:32.105 "dma_device_id": "system", 00:15:32.105 "dma_device_type": 1 00:15:32.105 }, 00:15:32.105 { 00:15:32.105 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:32.105 "dma_device_type": 2 00:15:32.105 } 00:15:32.105 ], 00:15:32.105 "driver_specific": {} 00:15:32.105 } 00:15:32.105 ] 00:15:32.105 10:00:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.105 10:00:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:32.105 10:00:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:32.105 10:00:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:32.105 10:00:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:32.105 10:00:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:32.105 10:00:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:32.105 10:00:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:32.105 10:00:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:32.105 10:00:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:32.105 10:00:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:32.105 10:00:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:32.105 10:00:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:15:32.105 10:00:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:32.105 10:00:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.105 10:00:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.105 10:00:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.105 10:00:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:32.105 "name": "Existed_Raid", 00:15:32.105 "uuid": "0078c6a6-5047-4128-8a2f-e2a1771299a3", 00:15:32.105 "strip_size_kb": 64, 00:15:32.105 "state": "configuring", 00:15:32.105 "raid_level": "raid5f", 00:15:32.105 "superblock": true, 00:15:32.105 "num_base_bdevs": 3, 00:15:32.105 "num_base_bdevs_discovered": 1, 00:15:32.105 "num_base_bdevs_operational": 3, 00:15:32.105 "base_bdevs_list": [ 00:15:32.105 { 00:15:32.105 "name": "BaseBdev1", 00:15:32.105 "uuid": "fa1b0588-8287-4d13-bd11-5832093881b8", 00:15:32.105 "is_configured": true, 00:15:32.105 "data_offset": 2048, 00:15:32.105 "data_size": 63488 00:15:32.105 }, 00:15:32.105 { 00:15:32.105 "name": "BaseBdev2", 00:15:32.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.105 "is_configured": false, 00:15:32.105 "data_offset": 0, 00:15:32.105 "data_size": 0 00:15:32.105 }, 00:15:32.105 { 00:15:32.105 "name": "BaseBdev3", 00:15:32.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.105 "is_configured": false, 00:15:32.105 "data_offset": 0, 00:15:32.105 "data_size": 0 00:15:32.105 } 00:15:32.105 ] 00:15:32.105 }' 00:15:32.105 10:00:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:32.105 10:00:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.365 10:00:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:15:32.365 10:00:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.365 10:00:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.365 [2024-10-21 10:00:08.886850] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:32.365 [2024-10-21 10:00:08.886929] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000005f00 name Existed_Raid, state configuring 00:15:32.365 10:00:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.365 10:00:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:32.365 10:00:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.365 10:00:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.365 [2024-10-21 10:00:08.898887] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:32.365 [2024-10-21 10:00:08.900974] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:32.365 [2024-10-21 10:00:08.901015] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:32.365 [2024-10-21 10:00:08.901025] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:32.365 [2024-10-21 10:00:08.901034] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:32.365 10:00:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.365 10:00:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:32.365 10:00:08 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:32.365 10:00:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:32.365 10:00:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:32.365 10:00:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:32.365 10:00:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:32.365 10:00:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:32.365 10:00:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:32.365 10:00:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:32.365 10:00:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:32.365 10:00:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:32.365 10:00:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:32.365 10:00:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.365 10:00:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.365 10:00:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.365 10:00:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:32.365 10:00:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.365 10:00:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:32.365 "name": 
"Existed_Raid", 00:15:32.365 "uuid": "8dde81e4-1262-41a5-bb04-a36498219b41", 00:15:32.365 "strip_size_kb": 64, 00:15:32.365 "state": "configuring", 00:15:32.365 "raid_level": "raid5f", 00:15:32.365 "superblock": true, 00:15:32.365 "num_base_bdevs": 3, 00:15:32.365 "num_base_bdevs_discovered": 1, 00:15:32.365 "num_base_bdevs_operational": 3, 00:15:32.365 "base_bdevs_list": [ 00:15:32.365 { 00:15:32.365 "name": "BaseBdev1", 00:15:32.365 "uuid": "fa1b0588-8287-4d13-bd11-5832093881b8", 00:15:32.365 "is_configured": true, 00:15:32.365 "data_offset": 2048, 00:15:32.365 "data_size": 63488 00:15:32.365 }, 00:15:32.365 { 00:15:32.365 "name": "BaseBdev2", 00:15:32.365 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.365 "is_configured": false, 00:15:32.365 "data_offset": 0, 00:15:32.365 "data_size": 0 00:15:32.365 }, 00:15:32.365 { 00:15:32.365 "name": "BaseBdev3", 00:15:32.365 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.365 "is_configured": false, 00:15:32.365 "data_offset": 0, 00:15:32.365 "data_size": 0 00:15:32.365 } 00:15:32.365 ] 00:15:32.365 }' 00:15:32.365 10:00:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:32.365 10:00:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.935 10:00:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:32.935 10:00:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.935 10:00:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.935 [2024-10-21 10:00:09.418774] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:32.935 BaseBdev2 00:15:32.935 10:00:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.935 10:00:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 
-- # waitforbdev BaseBdev2 00:15:32.935 10:00:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:15:32.935 10:00:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:32.935 10:00:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:32.935 10:00:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:32.935 10:00:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:32.935 10:00:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:32.935 10:00:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.935 10:00:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.935 10:00:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.935 10:00:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:32.935 10:00:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.935 10:00:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.935 [ 00:15:32.935 { 00:15:32.935 "name": "BaseBdev2", 00:15:32.935 "aliases": [ 00:15:32.935 "6bc7e5bb-51f1-444f-8b08-4659a4224fae" 00:15:32.935 ], 00:15:32.935 "product_name": "Malloc disk", 00:15:32.935 "block_size": 512, 00:15:32.935 "num_blocks": 65536, 00:15:32.935 "uuid": "6bc7e5bb-51f1-444f-8b08-4659a4224fae", 00:15:32.935 "assigned_rate_limits": { 00:15:32.935 "rw_ios_per_sec": 0, 00:15:32.935 "rw_mbytes_per_sec": 0, 00:15:32.935 "r_mbytes_per_sec": 0, 00:15:32.935 "w_mbytes_per_sec": 0 00:15:32.935 }, 00:15:32.935 "claimed": true, 
00:15:32.935 "claim_type": "exclusive_write", 00:15:32.935 "zoned": false, 00:15:32.935 "supported_io_types": { 00:15:32.935 "read": true, 00:15:32.935 "write": true, 00:15:32.935 "unmap": true, 00:15:32.935 "flush": true, 00:15:32.935 "reset": true, 00:15:32.935 "nvme_admin": false, 00:15:32.935 "nvme_io": false, 00:15:32.935 "nvme_io_md": false, 00:15:32.935 "write_zeroes": true, 00:15:32.935 "zcopy": true, 00:15:32.935 "get_zone_info": false, 00:15:32.935 "zone_management": false, 00:15:32.935 "zone_append": false, 00:15:32.935 "compare": false, 00:15:32.935 "compare_and_write": false, 00:15:32.935 "abort": true, 00:15:32.935 "seek_hole": false, 00:15:32.935 "seek_data": false, 00:15:32.935 "copy": true, 00:15:32.935 "nvme_iov_md": false 00:15:32.935 }, 00:15:32.935 "memory_domains": [ 00:15:32.935 { 00:15:32.935 "dma_device_id": "system", 00:15:32.935 "dma_device_type": 1 00:15:32.935 }, 00:15:32.935 { 00:15:32.935 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:32.935 "dma_device_type": 2 00:15:32.935 } 00:15:32.935 ], 00:15:32.935 "driver_specific": {} 00:15:32.935 } 00:15:32.935 ] 00:15:32.935 10:00:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.935 10:00:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:32.935 10:00:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:32.935 10:00:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:32.935 10:00:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:32.935 10:00:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:32.935 10:00:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:32.935 10:00:09 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:32.935 10:00:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:32.935 10:00:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:32.935 10:00:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:32.935 10:00:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:32.935 10:00:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:32.935 10:00:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:32.935 10:00:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.935 10:00:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:32.935 10:00:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.935 10:00:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.935 10:00:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.935 10:00:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:32.935 "name": "Existed_Raid", 00:15:32.935 "uuid": "8dde81e4-1262-41a5-bb04-a36498219b41", 00:15:32.935 "strip_size_kb": 64, 00:15:32.935 "state": "configuring", 00:15:32.935 "raid_level": "raid5f", 00:15:32.935 "superblock": true, 00:15:32.935 "num_base_bdevs": 3, 00:15:32.935 "num_base_bdevs_discovered": 2, 00:15:32.935 "num_base_bdevs_operational": 3, 00:15:32.935 "base_bdevs_list": [ 00:15:32.935 { 00:15:32.935 "name": "BaseBdev1", 00:15:32.935 "uuid": "fa1b0588-8287-4d13-bd11-5832093881b8", 
00:15:32.935 "is_configured": true, 00:15:32.935 "data_offset": 2048, 00:15:32.935 "data_size": 63488 00:15:32.935 }, 00:15:32.935 { 00:15:32.935 "name": "BaseBdev2", 00:15:32.935 "uuid": "6bc7e5bb-51f1-444f-8b08-4659a4224fae", 00:15:32.935 "is_configured": true, 00:15:32.935 "data_offset": 2048, 00:15:32.935 "data_size": 63488 00:15:32.935 }, 00:15:32.935 { 00:15:32.935 "name": "BaseBdev3", 00:15:32.935 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.935 "is_configured": false, 00:15:32.935 "data_offset": 0, 00:15:32.935 "data_size": 0 00:15:32.935 } 00:15:32.935 ] 00:15:32.935 }' 00:15:32.935 10:00:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:32.935 10:00:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.505 10:00:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:33.505 10:00:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.505 10:00:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.505 [2024-10-21 10:00:09.944906] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:33.505 [2024-10-21 10:00:09.945235] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:15:33.505 [2024-10-21 10:00:09.945260] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:33.505 [2024-10-21 10:00:09.945578] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:15:33.505 BaseBdev3 00:15:33.505 10:00:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.505 10:00:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:33.505 10:00:09 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:15:33.505 10:00:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:33.505 10:00:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:33.505 10:00:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:33.505 10:00:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:33.505 10:00:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:33.505 10:00:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.505 10:00:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.505 [2024-10-21 10:00:09.952679] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:15:33.505 [2024-10-21 10:00:09.952701] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006280 00:15:33.505 [2024-10-21 10:00:09.952884] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:33.505 10:00:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.505 10:00:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:33.505 10:00:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.505 10:00:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.505 [ 00:15:33.505 { 00:15:33.505 "name": "BaseBdev3", 00:15:33.505 "aliases": [ 00:15:33.505 "74503c3c-03aa-4c14-917f-583b3214af4b" 00:15:33.505 ], 00:15:33.505 "product_name": "Malloc disk", 00:15:33.505 "block_size": 512, 00:15:33.505 
"num_blocks": 65536, 00:15:33.505 "uuid": "74503c3c-03aa-4c14-917f-583b3214af4b", 00:15:33.505 "assigned_rate_limits": { 00:15:33.505 "rw_ios_per_sec": 0, 00:15:33.505 "rw_mbytes_per_sec": 0, 00:15:33.505 "r_mbytes_per_sec": 0, 00:15:33.505 "w_mbytes_per_sec": 0 00:15:33.505 }, 00:15:33.505 "claimed": true, 00:15:33.505 "claim_type": "exclusive_write", 00:15:33.505 "zoned": false, 00:15:33.505 "supported_io_types": { 00:15:33.505 "read": true, 00:15:33.505 "write": true, 00:15:33.505 "unmap": true, 00:15:33.505 "flush": true, 00:15:33.505 "reset": true, 00:15:33.505 "nvme_admin": false, 00:15:33.505 "nvme_io": false, 00:15:33.505 "nvme_io_md": false, 00:15:33.505 "write_zeroes": true, 00:15:33.505 "zcopy": true, 00:15:33.505 "get_zone_info": false, 00:15:33.505 "zone_management": false, 00:15:33.505 "zone_append": false, 00:15:33.505 "compare": false, 00:15:33.505 "compare_and_write": false, 00:15:33.505 "abort": true, 00:15:33.505 "seek_hole": false, 00:15:33.505 "seek_data": false, 00:15:33.505 "copy": true, 00:15:33.505 "nvme_iov_md": false 00:15:33.505 }, 00:15:33.505 "memory_domains": [ 00:15:33.505 { 00:15:33.505 "dma_device_id": "system", 00:15:33.505 "dma_device_type": 1 00:15:33.505 }, 00:15:33.505 { 00:15:33.505 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:33.505 "dma_device_type": 2 00:15:33.505 } 00:15:33.505 ], 00:15:33.505 "driver_specific": {} 00:15:33.505 } 00:15:33.505 ] 00:15:33.505 10:00:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.505 10:00:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:33.505 10:00:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:33.505 10:00:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:33.505 10:00:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 
3 00:15:33.505 10:00:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:33.505 10:00:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:33.505 10:00:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:33.505 10:00:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:33.505 10:00:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:33.505 10:00:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:33.505 10:00:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:33.505 10:00:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:33.505 10:00:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:33.505 10:00:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.505 10:00:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:33.505 10:00:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.505 10:00:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.505 10:00:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.505 10:00:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:33.505 "name": "Existed_Raid", 00:15:33.505 "uuid": "8dde81e4-1262-41a5-bb04-a36498219b41", 00:15:33.505 "strip_size_kb": 64, 00:15:33.505 "state": "online", 00:15:33.505 "raid_level": "raid5f", 00:15:33.505 "superblock": true, 
00:15:33.505 "num_base_bdevs": 3, 00:15:33.505 "num_base_bdevs_discovered": 3, 00:15:33.505 "num_base_bdevs_operational": 3, 00:15:33.505 "base_bdevs_list": [ 00:15:33.505 { 00:15:33.505 "name": "BaseBdev1", 00:15:33.505 "uuid": "fa1b0588-8287-4d13-bd11-5832093881b8", 00:15:33.505 "is_configured": true, 00:15:33.505 "data_offset": 2048, 00:15:33.505 "data_size": 63488 00:15:33.505 }, 00:15:33.505 { 00:15:33.505 "name": "BaseBdev2", 00:15:33.505 "uuid": "6bc7e5bb-51f1-444f-8b08-4659a4224fae", 00:15:33.505 "is_configured": true, 00:15:33.505 "data_offset": 2048, 00:15:33.505 "data_size": 63488 00:15:33.505 }, 00:15:33.505 { 00:15:33.505 "name": "BaseBdev3", 00:15:33.505 "uuid": "74503c3c-03aa-4c14-917f-583b3214af4b", 00:15:33.505 "is_configured": true, 00:15:33.505 "data_offset": 2048, 00:15:33.505 "data_size": 63488 00:15:33.505 } 00:15:33.505 ] 00:15:33.505 }' 00:15:33.505 10:00:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:33.505 10:00:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.074 10:00:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:34.074 10:00:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:34.074 10:00:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:34.074 10:00:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:34.074 10:00:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:34.074 10:00:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:34.074 10:00:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:34.074 10:00:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # 
rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:34.074 10:00:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.074 10:00:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.074 [2024-10-21 10:00:10.463413] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:34.074 10:00:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.074 10:00:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:34.074 "name": "Existed_Raid", 00:15:34.074 "aliases": [ 00:15:34.074 "8dde81e4-1262-41a5-bb04-a36498219b41" 00:15:34.074 ], 00:15:34.074 "product_name": "Raid Volume", 00:15:34.074 "block_size": 512, 00:15:34.074 "num_blocks": 126976, 00:15:34.074 "uuid": "8dde81e4-1262-41a5-bb04-a36498219b41", 00:15:34.074 "assigned_rate_limits": { 00:15:34.074 "rw_ios_per_sec": 0, 00:15:34.074 "rw_mbytes_per_sec": 0, 00:15:34.074 "r_mbytes_per_sec": 0, 00:15:34.074 "w_mbytes_per_sec": 0 00:15:34.074 }, 00:15:34.074 "claimed": false, 00:15:34.074 "zoned": false, 00:15:34.074 "supported_io_types": { 00:15:34.074 "read": true, 00:15:34.074 "write": true, 00:15:34.074 "unmap": false, 00:15:34.074 "flush": false, 00:15:34.074 "reset": true, 00:15:34.074 "nvme_admin": false, 00:15:34.074 "nvme_io": false, 00:15:34.074 "nvme_io_md": false, 00:15:34.074 "write_zeroes": true, 00:15:34.074 "zcopy": false, 00:15:34.074 "get_zone_info": false, 00:15:34.074 "zone_management": false, 00:15:34.074 "zone_append": false, 00:15:34.074 "compare": false, 00:15:34.074 "compare_and_write": false, 00:15:34.074 "abort": false, 00:15:34.074 "seek_hole": false, 00:15:34.074 "seek_data": false, 00:15:34.074 "copy": false, 00:15:34.074 "nvme_iov_md": false 00:15:34.074 }, 00:15:34.074 "driver_specific": { 00:15:34.074 "raid": { 00:15:34.074 "uuid": "8dde81e4-1262-41a5-bb04-a36498219b41", 00:15:34.074 
"strip_size_kb": 64, 00:15:34.074 "state": "online", 00:15:34.074 "raid_level": "raid5f", 00:15:34.074 "superblock": true, 00:15:34.074 "num_base_bdevs": 3, 00:15:34.074 "num_base_bdevs_discovered": 3, 00:15:34.074 "num_base_bdevs_operational": 3, 00:15:34.074 "base_bdevs_list": [ 00:15:34.074 { 00:15:34.074 "name": "BaseBdev1", 00:15:34.074 "uuid": "fa1b0588-8287-4d13-bd11-5832093881b8", 00:15:34.074 "is_configured": true, 00:15:34.074 "data_offset": 2048, 00:15:34.074 "data_size": 63488 00:15:34.074 }, 00:15:34.074 { 00:15:34.074 "name": "BaseBdev2", 00:15:34.074 "uuid": "6bc7e5bb-51f1-444f-8b08-4659a4224fae", 00:15:34.074 "is_configured": true, 00:15:34.074 "data_offset": 2048, 00:15:34.074 "data_size": 63488 00:15:34.074 }, 00:15:34.074 { 00:15:34.074 "name": "BaseBdev3", 00:15:34.074 "uuid": "74503c3c-03aa-4c14-917f-583b3214af4b", 00:15:34.074 "is_configured": true, 00:15:34.074 "data_offset": 2048, 00:15:34.074 "data_size": 63488 00:15:34.074 } 00:15:34.074 ] 00:15:34.074 } 00:15:34.074 } 00:15:34.074 }' 00:15:34.074 10:00:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:34.074 10:00:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:34.074 BaseBdev2 00:15:34.074 BaseBdev3' 00:15:34.074 10:00:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:34.074 10:00:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:34.074 10:00:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:34.074 10:00:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:34.074 10:00:10 bdev_raid.raid5f_state_function_test_sb 
-- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:34.074 10:00:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.074 10:00:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.074 10:00:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.074 10:00:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:34.074 10:00:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:34.074 10:00:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:34.074 10:00:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:34.074 10:00:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.074 10:00:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.074 10:00:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:34.074 10:00:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.074 10:00:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:34.074 10:00:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:34.074 10:00:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:34.074 10:00:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:34.074 10:00:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:15:34.074 10:00:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.074 10:00:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.333 10:00:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.333 10:00:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:34.334 10:00:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:34.334 10:00:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:34.334 10:00:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.334 10:00:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.334 [2024-10-21 10:00:10.714788] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:34.334 10:00:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.334 10:00:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:34.334 10:00:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:15:34.334 10:00:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:34.334 10:00:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:15:34.334 10:00:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:34.334 10:00:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:15:34.334 10:00:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:15:34.334 10:00:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:34.334 10:00:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:34.334 10:00:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:34.334 10:00:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:34.334 10:00:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:34.334 10:00:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:34.334 10:00:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:34.334 10:00:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:34.334 10:00:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.334 10:00:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.334 10:00:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:34.334 10:00:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.334 10:00:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.334 10:00:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:34.334 "name": "Existed_Raid", 00:15:34.334 "uuid": "8dde81e4-1262-41a5-bb04-a36498219b41", 00:15:34.334 "strip_size_kb": 64, 00:15:34.334 "state": "online", 00:15:34.334 "raid_level": "raid5f", 00:15:34.334 "superblock": true, 00:15:34.334 "num_base_bdevs": 3, 00:15:34.334 "num_base_bdevs_discovered": 2, 00:15:34.334 "num_base_bdevs_operational": 2, 
00:15:34.334 "base_bdevs_list": [ 00:15:34.334 { 00:15:34.334 "name": null, 00:15:34.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.334 "is_configured": false, 00:15:34.334 "data_offset": 0, 00:15:34.334 "data_size": 63488 00:15:34.334 }, 00:15:34.334 { 00:15:34.334 "name": "BaseBdev2", 00:15:34.334 "uuid": "6bc7e5bb-51f1-444f-8b08-4659a4224fae", 00:15:34.334 "is_configured": true, 00:15:34.334 "data_offset": 2048, 00:15:34.334 "data_size": 63488 00:15:34.334 }, 00:15:34.334 { 00:15:34.334 "name": "BaseBdev3", 00:15:34.334 "uuid": "74503c3c-03aa-4c14-917f-583b3214af4b", 00:15:34.334 "is_configured": true, 00:15:34.334 "data_offset": 2048, 00:15:34.334 "data_size": 63488 00:15:34.334 } 00:15:34.334 ] 00:15:34.334 }' 00:15:34.334 10:00:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:34.334 10:00:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.593 10:00:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:34.593 10:00:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:34.593 10:00:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:34.593 10:00:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.593 10:00:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.593 10:00:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.852 10:00:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.852 10:00:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:34.852 10:00:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:15:34.852 10:00:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:34.852 10:00:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.852 10:00:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.852 [2024-10-21 10:00:11.219042] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:34.852 [2024-10-21 10:00:11.219225] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:34.852 [2024-10-21 10:00:11.321910] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:34.852 10:00:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.852 10:00:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:34.852 10:00:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:34.852 10:00:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.852 10:00:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:34.852 10:00:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.853 10:00:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.853 10:00:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.853 10:00:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:34.853 10:00:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:34.853 10:00:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:34.853 
10:00:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.853 10:00:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.853 [2024-10-21 10:00:11.377855] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:34.853 [2024-10-21 10:00:11.377925] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state offline 00:15:35.114 10:00:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.114 10:00:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:35.114 10:00:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:35.114 10:00:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.114 10:00:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.114 10:00:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:35.114 10:00:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.114 10:00:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.114 10:00:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:35.114 10:00:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:35.114 10:00:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:15:35.114 10:00:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:35.114 10:00:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:35.114 10:00:11 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:35.114 10:00:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.114 10:00:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.114 BaseBdev2 00:15:35.114 10:00:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.114 10:00:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:35.114 10:00:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:15:35.114 10:00:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:35.114 10:00:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:35.114 10:00:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:35.114 10:00:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:35.114 10:00:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:35.114 10:00:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.114 10:00:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.114 10:00:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.114 10:00:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:35.114 10:00:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.114 10:00:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.114 [ 00:15:35.114 { 
00:15:35.114 "name": "BaseBdev2", 00:15:35.114 "aliases": [ 00:15:35.114 "d90bc9f2-7ae9-4a1d-bb88-3c0fc4ebc732" 00:15:35.114 ], 00:15:35.114 "product_name": "Malloc disk", 00:15:35.114 "block_size": 512, 00:15:35.114 "num_blocks": 65536, 00:15:35.114 "uuid": "d90bc9f2-7ae9-4a1d-bb88-3c0fc4ebc732", 00:15:35.114 "assigned_rate_limits": { 00:15:35.114 "rw_ios_per_sec": 0, 00:15:35.114 "rw_mbytes_per_sec": 0, 00:15:35.114 "r_mbytes_per_sec": 0, 00:15:35.114 "w_mbytes_per_sec": 0 00:15:35.114 }, 00:15:35.114 "claimed": false, 00:15:35.114 "zoned": false, 00:15:35.114 "supported_io_types": { 00:15:35.114 "read": true, 00:15:35.114 "write": true, 00:15:35.114 "unmap": true, 00:15:35.114 "flush": true, 00:15:35.114 "reset": true, 00:15:35.114 "nvme_admin": false, 00:15:35.114 "nvme_io": false, 00:15:35.114 "nvme_io_md": false, 00:15:35.114 "write_zeroes": true, 00:15:35.114 "zcopy": true, 00:15:35.114 "get_zone_info": false, 00:15:35.114 "zone_management": false, 00:15:35.114 "zone_append": false, 00:15:35.114 "compare": false, 00:15:35.114 "compare_and_write": false, 00:15:35.114 "abort": true, 00:15:35.114 "seek_hole": false, 00:15:35.114 "seek_data": false, 00:15:35.114 "copy": true, 00:15:35.114 "nvme_iov_md": false 00:15:35.114 }, 00:15:35.114 "memory_domains": [ 00:15:35.114 { 00:15:35.114 "dma_device_id": "system", 00:15:35.114 "dma_device_type": 1 00:15:35.114 }, 00:15:35.114 { 00:15:35.114 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:35.114 "dma_device_type": 2 00:15:35.114 } 00:15:35.114 ], 00:15:35.114 "driver_specific": {} 00:15:35.114 } 00:15:35.114 ] 00:15:35.114 10:00:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.114 10:00:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:35.114 10:00:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:35.114 10:00:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- 
# (( i < num_base_bdevs )) 00:15:35.114 10:00:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:35.114 10:00:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.114 10:00:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.114 BaseBdev3 00:15:35.114 10:00:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.114 10:00:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:35.114 10:00:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:15:35.114 10:00:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:35.114 10:00:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:35.114 10:00:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:35.114 10:00:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:35.114 10:00:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:35.114 10:00:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.114 10:00:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.114 10:00:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.114 10:00:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:35.114 10:00:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.114 10:00:11 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.114 [ 00:15:35.114 { 00:15:35.114 "name": "BaseBdev3", 00:15:35.114 "aliases": [ 00:15:35.114 "81b54ed0-572c-4e70-bd03-26a3d0ec6ed6" 00:15:35.114 ], 00:15:35.114 "product_name": "Malloc disk", 00:15:35.114 "block_size": 512, 00:15:35.114 "num_blocks": 65536, 00:15:35.114 "uuid": "81b54ed0-572c-4e70-bd03-26a3d0ec6ed6", 00:15:35.114 "assigned_rate_limits": { 00:15:35.114 "rw_ios_per_sec": 0, 00:15:35.114 "rw_mbytes_per_sec": 0, 00:15:35.114 "r_mbytes_per_sec": 0, 00:15:35.114 "w_mbytes_per_sec": 0 00:15:35.114 }, 00:15:35.114 "claimed": false, 00:15:35.114 "zoned": false, 00:15:35.114 "supported_io_types": { 00:15:35.114 "read": true, 00:15:35.114 "write": true, 00:15:35.114 "unmap": true, 00:15:35.114 "flush": true, 00:15:35.114 "reset": true, 00:15:35.115 "nvme_admin": false, 00:15:35.115 "nvme_io": false, 00:15:35.115 "nvme_io_md": false, 00:15:35.115 "write_zeroes": true, 00:15:35.115 "zcopy": true, 00:15:35.115 "get_zone_info": false, 00:15:35.115 "zone_management": false, 00:15:35.115 "zone_append": false, 00:15:35.115 "compare": false, 00:15:35.115 "compare_and_write": false, 00:15:35.115 "abort": true, 00:15:35.115 "seek_hole": false, 00:15:35.115 "seek_data": false, 00:15:35.115 "copy": true, 00:15:35.115 "nvme_iov_md": false 00:15:35.115 }, 00:15:35.115 "memory_domains": [ 00:15:35.115 { 00:15:35.115 "dma_device_id": "system", 00:15:35.115 "dma_device_type": 1 00:15:35.115 }, 00:15:35.115 { 00:15:35.115 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:35.115 "dma_device_type": 2 00:15:35.115 } 00:15:35.115 ], 00:15:35.115 "driver_specific": {} 00:15:35.115 } 00:15:35.115 ] 00:15:35.115 10:00:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.115 10:00:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:35.115 10:00:11 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:35.115 10:00:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:35.115 10:00:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:35.115 10:00:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.115 10:00:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.374 [2024-10-21 10:00:11.715185] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:35.374 [2024-10-21 10:00:11.715255] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:35.374 [2024-10-21 10:00:11.715287] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:35.374 [2024-10-21 10:00:11.717430] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:35.374 10:00:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.374 10:00:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:35.374 10:00:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:35.374 10:00:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:35.374 10:00:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:35.374 10:00:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:35.374 10:00:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:35.374 10:00:11 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:35.374 10:00:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:35.374 10:00:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:35.374 10:00:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:35.374 10:00:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.374 10:00:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.374 10:00:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:35.374 10:00:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.374 10:00:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.374 10:00:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:35.374 "name": "Existed_Raid", 00:15:35.374 "uuid": "c702d020-85b1-4045-ac73-ce231a6e4bcf", 00:15:35.374 "strip_size_kb": 64, 00:15:35.374 "state": "configuring", 00:15:35.374 "raid_level": "raid5f", 00:15:35.374 "superblock": true, 00:15:35.374 "num_base_bdevs": 3, 00:15:35.374 "num_base_bdevs_discovered": 2, 00:15:35.374 "num_base_bdevs_operational": 3, 00:15:35.374 "base_bdevs_list": [ 00:15:35.374 { 00:15:35.374 "name": "BaseBdev1", 00:15:35.374 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.374 "is_configured": false, 00:15:35.374 "data_offset": 0, 00:15:35.374 "data_size": 0 00:15:35.374 }, 00:15:35.374 { 00:15:35.374 "name": "BaseBdev2", 00:15:35.374 "uuid": "d90bc9f2-7ae9-4a1d-bb88-3c0fc4ebc732", 00:15:35.374 "is_configured": true, 00:15:35.374 "data_offset": 2048, 00:15:35.374 "data_size": 63488 00:15:35.374 }, 00:15:35.374 { 
00:15:35.374 "name": "BaseBdev3", 00:15:35.374 "uuid": "81b54ed0-572c-4e70-bd03-26a3d0ec6ed6", 00:15:35.374 "is_configured": true, 00:15:35.374 "data_offset": 2048, 00:15:35.374 "data_size": 63488 00:15:35.374 } 00:15:35.374 ] 00:15:35.374 }' 00:15:35.374 10:00:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:35.374 10:00:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.634 10:00:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:35.634 10:00:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.634 10:00:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.634 [2024-10-21 10:00:12.162422] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:35.634 10:00:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.634 10:00:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:35.634 10:00:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:35.634 10:00:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:35.634 10:00:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:35.634 10:00:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:35.634 10:00:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:35.634 10:00:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:35.634 10:00:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:15:35.634 10:00:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:35.634 10:00:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:35.634 10:00:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.634 10:00:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:35.634 10:00:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.634 10:00:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.634 10:00:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.634 10:00:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:35.634 "name": "Existed_Raid", 00:15:35.634 "uuid": "c702d020-85b1-4045-ac73-ce231a6e4bcf", 00:15:35.634 "strip_size_kb": 64, 00:15:35.634 "state": "configuring", 00:15:35.634 "raid_level": "raid5f", 00:15:35.634 "superblock": true, 00:15:35.634 "num_base_bdevs": 3, 00:15:35.634 "num_base_bdevs_discovered": 1, 00:15:35.634 "num_base_bdevs_operational": 3, 00:15:35.634 "base_bdevs_list": [ 00:15:35.634 { 00:15:35.634 "name": "BaseBdev1", 00:15:35.634 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.634 "is_configured": false, 00:15:35.634 "data_offset": 0, 00:15:35.634 "data_size": 0 00:15:35.634 }, 00:15:35.634 { 00:15:35.634 "name": null, 00:15:35.634 "uuid": "d90bc9f2-7ae9-4a1d-bb88-3c0fc4ebc732", 00:15:35.634 "is_configured": false, 00:15:35.634 "data_offset": 0, 00:15:35.634 "data_size": 63488 00:15:35.634 }, 00:15:35.634 { 00:15:35.634 "name": "BaseBdev3", 00:15:35.634 "uuid": "81b54ed0-572c-4e70-bd03-26a3d0ec6ed6", 00:15:35.634 "is_configured": true, 00:15:35.634 "data_offset": 2048, 00:15:35.634 "data_size": 
63488 00:15:35.634 } 00:15:35.634 ] 00:15:35.634 }' 00:15:35.634 10:00:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:35.634 10:00:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.204 10:00:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.204 10:00:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.204 10:00:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.204 10:00:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:36.204 10:00:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.204 10:00:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:36.204 10:00:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:36.204 10:00:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.204 10:00:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.204 [2024-10-21 10:00:12.622418] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:36.204 BaseBdev1 00:15:36.204 10:00:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.204 10:00:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:36.204 10:00:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:15:36.204 10:00:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:36.204 10:00:12 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:36.204 10:00:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:36.204 10:00:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:36.204 10:00:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:36.204 10:00:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.204 10:00:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.204 10:00:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.204 10:00:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:36.204 10:00:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.204 10:00:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.204 [ 00:15:36.204 { 00:15:36.204 "name": "BaseBdev1", 00:15:36.204 "aliases": [ 00:15:36.204 "40f69082-4f67-4cff-abc6-ad2a7867a2e8" 00:15:36.204 ], 00:15:36.204 "product_name": "Malloc disk", 00:15:36.204 "block_size": 512, 00:15:36.204 "num_blocks": 65536, 00:15:36.204 "uuid": "40f69082-4f67-4cff-abc6-ad2a7867a2e8", 00:15:36.204 "assigned_rate_limits": { 00:15:36.204 "rw_ios_per_sec": 0, 00:15:36.204 "rw_mbytes_per_sec": 0, 00:15:36.204 "r_mbytes_per_sec": 0, 00:15:36.204 "w_mbytes_per_sec": 0 00:15:36.204 }, 00:15:36.204 "claimed": true, 00:15:36.204 "claim_type": "exclusive_write", 00:15:36.204 "zoned": false, 00:15:36.204 "supported_io_types": { 00:15:36.204 "read": true, 00:15:36.204 "write": true, 00:15:36.204 "unmap": true, 00:15:36.204 "flush": true, 00:15:36.204 "reset": true, 00:15:36.204 "nvme_admin": false, 00:15:36.204 
"nvme_io": false, 00:15:36.204 "nvme_io_md": false, 00:15:36.204 "write_zeroes": true, 00:15:36.204 "zcopy": true, 00:15:36.204 "get_zone_info": false, 00:15:36.204 "zone_management": false, 00:15:36.204 "zone_append": false, 00:15:36.204 "compare": false, 00:15:36.204 "compare_and_write": false, 00:15:36.204 "abort": true, 00:15:36.204 "seek_hole": false, 00:15:36.204 "seek_data": false, 00:15:36.204 "copy": true, 00:15:36.204 "nvme_iov_md": false 00:15:36.204 }, 00:15:36.204 "memory_domains": [ 00:15:36.204 { 00:15:36.204 "dma_device_id": "system", 00:15:36.204 "dma_device_type": 1 00:15:36.204 }, 00:15:36.204 { 00:15:36.204 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:36.204 "dma_device_type": 2 00:15:36.204 } 00:15:36.204 ], 00:15:36.204 "driver_specific": {} 00:15:36.204 } 00:15:36.204 ] 00:15:36.204 10:00:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.205 10:00:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:36.205 10:00:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:36.205 10:00:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:36.205 10:00:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:36.205 10:00:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:36.205 10:00:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:36.205 10:00:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:36.205 10:00:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:36.205 10:00:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:15:36.205 10:00:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:36.205 10:00:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:36.205 10:00:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.205 10:00:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.205 10:00:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:36.205 10:00:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.205 10:00:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.205 10:00:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:36.205 "name": "Existed_Raid", 00:15:36.205 "uuid": "c702d020-85b1-4045-ac73-ce231a6e4bcf", 00:15:36.205 "strip_size_kb": 64, 00:15:36.205 "state": "configuring", 00:15:36.205 "raid_level": "raid5f", 00:15:36.205 "superblock": true, 00:15:36.205 "num_base_bdevs": 3, 00:15:36.205 "num_base_bdevs_discovered": 2, 00:15:36.205 "num_base_bdevs_operational": 3, 00:15:36.205 "base_bdevs_list": [ 00:15:36.205 { 00:15:36.205 "name": "BaseBdev1", 00:15:36.205 "uuid": "40f69082-4f67-4cff-abc6-ad2a7867a2e8", 00:15:36.205 "is_configured": true, 00:15:36.205 "data_offset": 2048, 00:15:36.205 "data_size": 63488 00:15:36.205 }, 00:15:36.205 { 00:15:36.205 "name": null, 00:15:36.205 "uuid": "d90bc9f2-7ae9-4a1d-bb88-3c0fc4ebc732", 00:15:36.205 "is_configured": false, 00:15:36.205 "data_offset": 0, 00:15:36.205 "data_size": 63488 00:15:36.205 }, 00:15:36.205 { 00:15:36.205 "name": "BaseBdev3", 00:15:36.205 "uuid": "81b54ed0-572c-4e70-bd03-26a3d0ec6ed6", 00:15:36.205 "is_configured": true, 00:15:36.205 "data_offset": 2048, 00:15:36.205 "data_size": 
63488 00:15:36.205 } 00:15:36.205 ] 00:15:36.205 }' 00:15:36.205 10:00:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:36.205 10:00:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.774 10:00:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:36.774 10:00:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.774 10:00:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.774 10:00:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.774 10:00:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.774 10:00:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:36.774 10:00:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:36.774 10:00:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.774 10:00:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.774 [2024-10-21 10:00:13.201508] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:36.774 10:00:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.774 10:00:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:36.774 10:00:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:36.774 10:00:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:36.774 10:00:13 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:36.774 10:00:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:36.774 10:00:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:36.774 10:00:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:36.774 10:00:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:36.774 10:00:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:36.774 10:00:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:36.774 10:00:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.774 10:00:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.774 10:00:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.774 10:00:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:36.774 10:00:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.774 10:00:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:36.774 "name": "Existed_Raid", 00:15:36.774 "uuid": "c702d020-85b1-4045-ac73-ce231a6e4bcf", 00:15:36.774 "strip_size_kb": 64, 00:15:36.774 "state": "configuring", 00:15:36.774 "raid_level": "raid5f", 00:15:36.774 "superblock": true, 00:15:36.774 "num_base_bdevs": 3, 00:15:36.774 "num_base_bdevs_discovered": 1, 00:15:36.774 "num_base_bdevs_operational": 3, 00:15:36.774 "base_bdevs_list": [ 00:15:36.774 { 00:15:36.774 "name": "BaseBdev1", 00:15:36.774 "uuid": "40f69082-4f67-4cff-abc6-ad2a7867a2e8", 
00:15:36.774 "is_configured": true, 00:15:36.774 "data_offset": 2048, 00:15:36.774 "data_size": 63488 00:15:36.774 }, 00:15:36.774 { 00:15:36.774 "name": null, 00:15:36.774 "uuid": "d90bc9f2-7ae9-4a1d-bb88-3c0fc4ebc732", 00:15:36.774 "is_configured": false, 00:15:36.774 "data_offset": 0, 00:15:36.774 "data_size": 63488 00:15:36.774 }, 00:15:36.774 { 00:15:36.774 "name": null, 00:15:36.774 "uuid": "81b54ed0-572c-4e70-bd03-26a3d0ec6ed6", 00:15:36.774 "is_configured": false, 00:15:36.774 "data_offset": 0, 00:15:36.774 "data_size": 63488 00:15:36.774 } 00:15:36.774 ] 00:15:36.774 }' 00:15:36.774 10:00:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:36.774 10:00:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.343 10:00:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.343 10:00:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:37.343 10:00:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.343 10:00:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.343 10:00:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.343 10:00:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:37.343 10:00:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:37.343 10:00:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.343 10:00:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.343 [2024-10-21 10:00:13.724614] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is 
claimed 00:15:37.343 10:00:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.343 10:00:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:37.343 10:00:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:37.343 10:00:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:37.343 10:00:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:37.343 10:00:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:37.343 10:00:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:37.343 10:00:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:37.343 10:00:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:37.343 10:00:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:37.343 10:00:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:37.343 10:00:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.343 10:00:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:37.343 10:00:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.343 10:00:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.343 10:00:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.343 10:00:13 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:37.343 "name": "Existed_Raid", 00:15:37.343 "uuid": "c702d020-85b1-4045-ac73-ce231a6e4bcf", 00:15:37.343 "strip_size_kb": 64, 00:15:37.343 "state": "configuring", 00:15:37.343 "raid_level": "raid5f", 00:15:37.343 "superblock": true, 00:15:37.343 "num_base_bdevs": 3, 00:15:37.343 "num_base_bdevs_discovered": 2, 00:15:37.343 "num_base_bdevs_operational": 3, 00:15:37.343 "base_bdevs_list": [ 00:15:37.343 { 00:15:37.343 "name": "BaseBdev1", 00:15:37.343 "uuid": "40f69082-4f67-4cff-abc6-ad2a7867a2e8", 00:15:37.343 "is_configured": true, 00:15:37.343 "data_offset": 2048, 00:15:37.343 "data_size": 63488 00:15:37.343 }, 00:15:37.343 { 00:15:37.343 "name": null, 00:15:37.343 "uuid": "d90bc9f2-7ae9-4a1d-bb88-3c0fc4ebc732", 00:15:37.343 "is_configured": false, 00:15:37.343 "data_offset": 0, 00:15:37.343 "data_size": 63488 00:15:37.343 }, 00:15:37.343 { 00:15:37.343 "name": "BaseBdev3", 00:15:37.343 "uuid": "81b54ed0-572c-4e70-bd03-26a3d0ec6ed6", 00:15:37.343 "is_configured": true, 00:15:37.343 "data_offset": 2048, 00:15:37.344 "data_size": 63488 00:15:37.344 } 00:15:37.344 ] 00:15:37.344 }' 00:15:37.344 10:00:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:37.344 10:00:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.603 10:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.603 10:00:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.603 10:00:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.603 10:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:37.603 10:00:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.603 10:00:14 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:37.603 10:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:37.603 10:00:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.603 10:00:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.603 [2024-10-21 10:00:14.179827] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:37.863 10:00:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.863 10:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:37.863 10:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:37.863 10:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:37.863 10:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:37.863 10:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:37.863 10:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:37.863 10:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:37.863 10:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:37.863 10:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:37.863 10:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:37.863 10:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:15:37.863 10:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.863 10:00:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.863 10:00:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.863 10:00:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.863 10:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:37.863 "name": "Existed_Raid", 00:15:37.863 "uuid": "c702d020-85b1-4045-ac73-ce231a6e4bcf", 00:15:37.863 "strip_size_kb": 64, 00:15:37.863 "state": "configuring", 00:15:37.863 "raid_level": "raid5f", 00:15:37.863 "superblock": true, 00:15:37.863 "num_base_bdevs": 3, 00:15:37.863 "num_base_bdevs_discovered": 1, 00:15:37.863 "num_base_bdevs_operational": 3, 00:15:37.863 "base_bdevs_list": [ 00:15:37.863 { 00:15:37.864 "name": null, 00:15:37.864 "uuid": "40f69082-4f67-4cff-abc6-ad2a7867a2e8", 00:15:37.864 "is_configured": false, 00:15:37.864 "data_offset": 0, 00:15:37.864 "data_size": 63488 00:15:37.864 }, 00:15:37.864 { 00:15:37.864 "name": null, 00:15:37.864 "uuid": "d90bc9f2-7ae9-4a1d-bb88-3c0fc4ebc732", 00:15:37.864 "is_configured": false, 00:15:37.864 "data_offset": 0, 00:15:37.864 "data_size": 63488 00:15:37.864 }, 00:15:37.864 { 00:15:37.864 "name": "BaseBdev3", 00:15:37.864 "uuid": "81b54ed0-572c-4e70-bd03-26a3d0ec6ed6", 00:15:37.864 "is_configured": true, 00:15:37.864 "data_offset": 2048, 00:15:37.864 "data_size": 63488 00:15:37.864 } 00:15:37.864 ] 00:15:37.864 }' 00:15:37.864 10:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:37.864 10:00:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.434 10:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq 
'.[0].base_bdevs_list[0].is_configured' 00:15:38.434 10:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.434 10:00:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.434 10:00:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.434 10:00:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.434 10:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:38.434 10:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:38.434 10:00:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.434 10:00:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.434 [2024-10-21 10:00:14.788933] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:38.434 10:00:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.434 10:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:38.434 10:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:38.434 10:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:38.434 10:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:38.434 10:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:38.434 10:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:38.434 
10:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:38.434 10:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:38.434 10:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:38.434 10:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:38.434 10:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.434 10:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:38.434 10:00:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.434 10:00:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.434 10:00:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.434 10:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:38.434 "name": "Existed_Raid", 00:15:38.434 "uuid": "c702d020-85b1-4045-ac73-ce231a6e4bcf", 00:15:38.434 "strip_size_kb": 64, 00:15:38.434 "state": "configuring", 00:15:38.434 "raid_level": "raid5f", 00:15:38.434 "superblock": true, 00:15:38.434 "num_base_bdevs": 3, 00:15:38.434 "num_base_bdevs_discovered": 2, 00:15:38.434 "num_base_bdevs_operational": 3, 00:15:38.434 "base_bdevs_list": [ 00:15:38.434 { 00:15:38.434 "name": null, 00:15:38.434 "uuid": "40f69082-4f67-4cff-abc6-ad2a7867a2e8", 00:15:38.434 "is_configured": false, 00:15:38.434 "data_offset": 0, 00:15:38.434 "data_size": 63488 00:15:38.434 }, 00:15:38.434 { 00:15:38.434 "name": "BaseBdev2", 00:15:38.434 "uuid": "d90bc9f2-7ae9-4a1d-bb88-3c0fc4ebc732", 00:15:38.434 "is_configured": true, 00:15:38.434 "data_offset": 2048, 00:15:38.434 "data_size": 63488 00:15:38.434 }, 
00:15:38.434 { 00:15:38.434 "name": "BaseBdev3", 00:15:38.434 "uuid": "81b54ed0-572c-4e70-bd03-26a3d0ec6ed6", 00:15:38.434 "is_configured": true, 00:15:38.434 "data_offset": 2048, 00:15:38.434 "data_size": 63488 00:15:38.434 } 00:15:38.434 ] 00:15:38.434 }' 00:15:38.434 10:00:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:38.434 10:00:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.693 10:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.693 10:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.693 10:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.693 10:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:38.693 10:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.693 10:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:38.693 10:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.693 10:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.694 10:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.694 10:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:38.694 10:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.694 10:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 40f69082-4f67-4cff-abc6-ad2a7867a2e8 00:15:38.694 10:00:15 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.694 10:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.953 [2024-10-21 10:00:15.321653] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:38.953 [2024-10-21 10:00:15.321926] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:15:38.953 [2024-10-21 10:00:15.321943] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:38.953 [2024-10-21 10:00:15.322241] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:38.953 NewBaseBdev 00:15:38.953 10:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.953 10:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:38.953 10:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:15:38.953 10:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:38.953 10:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:38.953 10:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:38.953 10:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:38.953 10:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:38.953 10:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.953 10:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.953 [2024-10-21 10:00:15.328424] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev 
generic 0x617000006600 00:15:38.953 [2024-10-21 10:00:15.328451] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006600 00:15:38.953 [2024-10-21 10:00:15.328642] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:38.953 10:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.953 10:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:38.953 10:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.953 10:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.953 [ 00:15:38.953 { 00:15:38.953 "name": "NewBaseBdev", 00:15:38.953 "aliases": [ 00:15:38.953 "40f69082-4f67-4cff-abc6-ad2a7867a2e8" 00:15:38.953 ], 00:15:38.953 "product_name": "Malloc disk", 00:15:38.953 "block_size": 512, 00:15:38.953 "num_blocks": 65536, 00:15:38.953 "uuid": "40f69082-4f67-4cff-abc6-ad2a7867a2e8", 00:15:38.953 "assigned_rate_limits": { 00:15:38.953 "rw_ios_per_sec": 0, 00:15:38.953 "rw_mbytes_per_sec": 0, 00:15:38.953 "r_mbytes_per_sec": 0, 00:15:38.953 "w_mbytes_per_sec": 0 00:15:38.953 }, 00:15:38.953 "claimed": true, 00:15:38.953 "claim_type": "exclusive_write", 00:15:38.953 "zoned": false, 00:15:38.953 "supported_io_types": { 00:15:38.953 "read": true, 00:15:38.953 "write": true, 00:15:38.953 "unmap": true, 00:15:38.953 "flush": true, 00:15:38.953 "reset": true, 00:15:38.953 "nvme_admin": false, 00:15:38.953 "nvme_io": false, 00:15:38.953 "nvme_io_md": false, 00:15:38.953 "write_zeroes": true, 00:15:38.953 "zcopy": true, 00:15:38.953 "get_zone_info": false, 00:15:38.953 "zone_management": false, 00:15:38.953 "zone_append": false, 00:15:38.953 "compare": false, 00:15:38.953 "compare_and_write": false, 00:15:38.953 "abort": true, 00:15:38.953 "seek_hole": false, 
00:15:38.953 "seek_data": false, 00:15:38.953 "copy": true, 00:15:38.954 "nvme_iov_md": false 00:15:38.954 }, 00:15:38.954 "memory_domains": [ 00:15:38.954 { 00:15:38.954 "dma_device_id": "system", 00:15:38.954 "dma_device_type": 1 00:15:38.954 }, 00:15:38.954 { 00:15:38.954 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:38.954 "dma_device_type": 2 00:15:38.954 } 00:15:38.954 ], 00:15:38.954 "driver_specific": {} 00:15:38.954 } 00:15:38.954 ] 00:15:38.954 10:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.954 10:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:38.954 10:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:38.954 10:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:38.954 10:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:38.954 10:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:38.954 10:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:38.954 10:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:38.954 10:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:38.954 10:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:38.954 10:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:38.954 10:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:38.954 10:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:15:38.954 10:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.954 10:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.954 10:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.954 10:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.954 10:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:38.954 "name": "Existed_Raid", 00:15:38.954 "uuid": "c702d020-85b1-4045-ac73-ce231a6e4bcf", 00:15:38.954 "strip_size_kb": 64, 00:15:38.954 "state": "online", 00:15:38.954 "raid_level": "raid5f", 00:15:38.954 "superblock": true, 00:15:38.954 "num_base_bdevs": 3, 00:15:38.954 "num_base_bdevs_discovered": 3, 00:15:38.954 "num_base_bdevs_operational": 3, 00:15:38.954 "base_bdevs_list": [ 00:15:38.954 { 00:15:38.954 "name": "NewBaseBdev", 00:15:38.954 "uuid": "40f69082-4f67-4cff-abc6-ad2a7867a2e8", 00:15:38.954 "is_configured": true, 00:15:38.954 "data_offset": 2048, 00:15:38.954 "data_size": 63488 00:15:38.954 }, 00:15:38.954 { 00:15:38.954 "name": "BaseBdev2", 00:15:38.954 "uuid": "d90bc9f2-7ae9-4a1d-bb88-3c0fc4ebc732", 00:15:38.954 "is_configured": true, 00:15:38.954 "data_offset": 2048, 00:15:38.954 "data_size": 63488 00:15:38.954 }, 00:15:38.954 { 00:15:38.954 "name": "BaseBdev3", 00:15:38.954 "uuid": "81b54ed0-572c-4e70-bd03-26a3d0ec6ed6", 00:15:38.954 "is_configured": true, 00:15:38.954 "data_offset": 2048, 00:15:38.954 "data_size": 63488 00:15:38.954 } 00:15:38.954 ] 00:15:38.954 }' 00:15:38.954 10:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:38.954 10:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.523 10:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # 
verify_raid_bdev_properties Existed_Raid 00:15:39.523 10:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:39.523 10:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:39.523 10:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:39.523 10:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:39.523 10:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:39.523 10:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:39.523 10:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.523 10:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.523 10:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:39.523 [2024-10-21 10:00:15.831040] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:39.523 10:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.523 10:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:39.523 "name": "Existed_Raid", 00:15:39.523 "aliases": [ 00:15:39.523 "c702d020-85b1-4045-ac73-ce231a6e4bcf" 00:15:39.523 ], 00:15:39.523 "product_name": "Raid Volume", 00:15:39.523 "block_size": 512, 00:15:39.523 "num_blocks": 126976, 00:15:39.523 "uuid": "c702d020-85b1-4045-ac73-ce231a6e4bcf", 00:15:39.523 "assigned_rate_limits": { 00:15:39.523 "rw_ios_per_sec": 0, 00:15:39.523 "rw_mbytes_per_sec": 0, 00:15:39.523 "r_mbytes_per_sec": 0, 00:15:39.523 "w_mbytes_per_sec": 0 00:15:39.523 }, 00:15:39.523 "claimed": false, 00:15:39.523 "zoned": false, 00:15:39.523 
"supported_io_types": { 00:15:39.523 "read": true, 00:15:39.523 "write": true, 00:15:39.523 "unmap": false, 00:15:39.523 "flush": false, 00:15:39.523 "reset": true, 00:15:39.523 "nvme_admin": false, 00:15:39.523 "nvme_io": false, 00:15:39.523 "nvme_io_md": false, 00:15:39.523 "write_zeroes": true, 00:15:39.523 "zcopy": false, 00:15:39.523 "get_zone_info": false, 00:15:39.523 "zone_management": false, 00:15:39.523 "zone_append": false, 00:15:39.523 "compare": false, 00:15:39.523 "compare_and_write": false, 00:15:39.523 "abort": false, 00:15:39.523 "seek_hole": false, 00:15:39.523 "seek_data": false, 00:15:39.523 "copy": false, 00:15:39.523 "nvme_iov_md": false 00:15:39.523 }, 00:15:39.523 "driver_specific": { 00:15:39.523 "raid": { 00:15:39.523 "uuid": "c702d020-85b1-4045-ac73-ce231a6e4bcf", 00:15:39.523 "strip_size_kb": 64, 00:15:39.523 "state": "online", 00:15:39.523 "raid_level": "raid5f", 00:15:39.523 "superblock": true, 00:15:39.523 "num_base_bdevs": 3, 00:15:39.523 "num_base_bdevs_discovered": 3, 00:15:39.523 "num_base_bdevs_operational": 3, 00:15:39.523 "base_bdevs_list": [ 00:15:39.523 { 00:15:39.523 "name": "NewBaseBdev", 00:15:39.523 "uuid": "40f69082-4f67-4cff-abc6-ad2a7867a2e8", 00:15:39.523 "is_configured": true, 00:15:39.523 "data_offset": 2048, 00:15:39.523 "data_size": 63488 00:15:39.523 }, 00:15:39.523 { 00:15:39.523 "name": "BaseBdev2", 00:15:39.523 "uuid": "d90bc9f2-7ae9-4a1d-bb88-3c0fc4ebc732", 00:15:39.523 "is_configured": true, 00:15:39.523 "data_offset": 2048, 00:15:39.523 "data_size": 63488 00:15:39.523 }, 00:15:39.523 { 00:15:39.523 "name": "BaseBdev3", 00:15:39.523 "uuid": "81b54ed0-572c-4e70-bd03-26a3d0ec6ed6", 00:15:39.523 "is_configured": true, 00:15:39.523 "data_offset": 2048, 00:15:39.523 "data_size": 63488 00:15:39.523 } 00:15:39.523 ] 00:15:39.523 } 00:15:39.523 } 00:15:39.523 }' 00:15:39.523 10:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:15:39.523 10:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:39.523 BaseBdev2 00:15:39.523 BaseBdev3' 00:15:39.524 10:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:39.524 10:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:39.524 10:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:39.524 10:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:39.524 10:00:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:39.524 10:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.524 10:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.524 10:00:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.524 10:00:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:39.524 10:00:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:39.524 10:00:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:39.524 10:00:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:39.524 10:00:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:39.524 10:00:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:15:39.524 10:00:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.524 10:00:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.524 10:00:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:39.524 10:00:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:39.524 10:00:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:39.524 10:00:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:39.524 10:00:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:39.524 10:00:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.524 10:00:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.524 10:00:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.524 10:00:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:39.524 10:00:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:39.524 10:00:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:39.524 10:00:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.524 10:00:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.524 [2024-10-21 10:00:16.110302] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:39.524 [2024-10-21 10:00:16.110339] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev 
state changing from online to offline 00:15:39.524 [2024-10-21 10:00:16.110434] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:39.524 [2024-10-21 10:00:16.110770] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:39.524 [2024-10-21 10:00:16.110794] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state offline 00:15:39.524 10:00:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.524 10:00:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 80152 00:15:39.524 10:00:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 80152 ']' 00:15:39.524 10:00:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 80152 00:15:39.783 10:00:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:15:39.783 10:00:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:39.783 10:00:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80152 00:15:39.783 10:00:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:39.783 10:00:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:39.783 killing process with pid 80152 00:15:39.783 10:00:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80152' 00:15:39.783 10:00:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 80152 00:15:39.783 [2024-10-21 10:00:16.154403] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:39.783 10:00:16 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@974 -- # wait 80152 00:15:40.043 [2024-10-21 10:00:16.482445] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:41.426 10:00:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:15:41.426 00:15:41.426 real 0m10.738s 00:15:41.426 user 0m16.714s 00:15:41.426 sys 0m2.121s 00:15:41.426 10:00:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:41.426 10:00:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.426 ************************************ 00:15:41.426 END TEST raid5f_state_function_test_sb 00:15:41.426 ************************************ 00:15:41.426 10:00:17 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:15:41.426 10:00:17 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:15:41.426 10:00:17 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:41.426 10:00:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:41.426 ************************************ 00:15:41.426 START TEST raid5f_superblock_test 00:15:41.426 ************************************ 00:15:41.426 10:00:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid5f 3 00:15:41.426 10:00:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:15:41.426 10:00:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:15:41.426 10:00:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:41.426 10:00:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:41.426 10:00:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:41.426 10:00:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 
00:15:41.426 10:00:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:41.426 10:00:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:41.426 10:00:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:41.426 10:00:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:41.426 10:00:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:41.426 10:00:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:41.426 10:00:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:41.426 10:00:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:15:41.426 10:00:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:15:41.426 10:00:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:15:41.426 10:00:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=80781 00:15:41.426 10:00:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:41.426 10:00:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 80781 00:15:41.426 10:00:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 80781 ']' 00:15:41.426 10:00:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:41.426 10:00:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:41.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:41.426 10:00:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:41.426 10:00:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:41.426 10:00:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.426 [2024-10-21 10:00:17.875532] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:15:41.426 [2024-10-21 10:00:17.875677] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80781 ] 00:15:41.685 [2024-10-21 10:00:18.035937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:41.685 [2024-10-21 10:00:18.180906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:41.945 [2024-10-21 10:00:18.430726] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:41.945 [2024-10-21 10:00:18.430784] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:42.204 10:00:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:42.204 10:00:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:15:42.204 10:00:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:42.204 10:00:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:42.204 10:00:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:42.204 10:00:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:42.204 10:00:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local 
bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:42.204 10:00:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:42.204 10:00:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:42.204 10:00:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:42.204 10:00:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:15:42.204 10:00:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.204 10:00:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.204 malloc1 00:15:42.204 10:00:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.204 10:00:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:42.204 10:00:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.204 10:00:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.204 [2024-10-21 10:00:18.771773] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:42.204 [2024-10-21 10:00:18.771873] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:42.204 [2024-10-21 10:00:18.771906] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:15:42.204 [2024-10-21 10:00:18.771917] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:42.204 [2024-10-21 10:00:18.774597] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:42.204 [2024-10-21 10:00:18.774634] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:42.204 pt1 00:15:42.204 
10:00:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.204 10:00:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:42.204 10:00:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:42.204 10:00:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:42.204 10:00:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:15:42.204 10:00:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:42.204 10:00:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:42.204 10:00:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:42.204 10:00:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:42.204 10:00:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:15:42.204 10:00:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.204 10:00:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.464 malloc2 00:15:42.464 10:00:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.464 10:00:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:42.464 10:00:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.464 10:00:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.464 [2024-10-21 10:00:18.836407] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:42.464 [2024-10-21 
10:00:18.836479] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:42.464 [2024-10-21 10:00:18.836508] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:15:42.464 [2024-10-21 10:00:18.836518] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:42.464 [2024-10-21 10:00:18.839125] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:42.464 [2024-10-21 10:00:18.839162] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:42.464 pt2 00:15:42.464 10:00:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.464 10:00:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:42.464 10:00:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:42.464 10:00:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:15:42.464 10:00:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:15:42.464 10:00:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:42.464 10:00:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:42.464 10:00:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:42.464 10:00:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:42.464 10:00:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:15:42.464 10:00:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.464 10:00:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.464 malloc3 00:15:42.464 10:00:18 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.464 10:00:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:42.464 10:00:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.464 10:00:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.464 [2024-10-21 10:00:18.919112] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:42.464 [2024-10-21 10:00:18.919185] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:42.464 [2024-10-21 10:00:18.919211] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:15:42.464 [2024-10-21 10:00:18.919222] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:42.464 [2024-10-21 10:00:18.921836] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:42.464 [2024-10-21 10:00:18.921872] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:42.464 pt3 00:15:42.464 10:00:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.464 10:00:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:42.464 10:00:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:42.464 10:00:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:15:42.464 10:00:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.464 10:00:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.464 [2024-10-21 10:00:18.931144] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 
is claimed 00:15:42.464 [2024-10-21 10:00:18.933431] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:42.464 [2024-10-21 10:00:18.933505] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:42.464 [2024-10-21 10:00:18.933693] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000005b80 00:15:42.464 [2024-10-21 10:00:18.933708] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:42.464 [2024-10-21 10:00:18.934008] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:15:42.464 [2024-10-21 10:00:18.940714] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000005b80 00:15:42.464 [2024-10-21 10:00:18.940737] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000005b80 00:15:42.464 [2024-10-21 10:00:18.940944] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:42.464 10:00:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.464 10:00:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:42.464 10:00:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:42.464 10:00:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:42.464 10:00:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:42.464 10:00:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:42.464 10:00:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:42.464 10:00:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:42.464 10:00:18 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:42.464 10:00:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:42.464 10:00:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:42.464 10:00:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.464 10:00:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.464 10:00:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:42.464 10:00:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.464 10:00:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.464 10:00:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:42.464 "name": "raid_bdev1", 00:15:42.464 "uuid": "59e63cbe-a70b-4797-9b23-f302c41a4b47", 00:15:42.464 "strip_size_kb": 64, 00:15:42.464 "state": "online", 00:15:42.464 "raid_level": "raid5f", 00:15:42.464 "superblock": true, 00:15:42.464 "num_base_bdevs": 3, 00:15:42.464 "num_base_bdevs_discovered": 3, 00:15:42.464 "num_base_bdevs_operational": 3, 00:15:42.464 "base_bdevs_list": [ 00:15:42.464 { 00:15:42.464 "name": "pt1", 00:15:42.464 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:42.464 "is_configured": true, 00:15:42.464 "data_offset": 2048, 00:15:42.464 "data_size": 63488 00:15:42.464 }, 00:15:42.464 { 00:15:42.464 "name": "pt2", 00:15:42.464 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:42.464 "is_configured": true, 00:15:42.464 "data_offset": 2048, 00:15:42.464 "data_size": 63488 00:15:42.464 }, 00:15:42.464 { 00:15:42.464 "name": "pt3", 00:15:42.464 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:42.464 "is_configured": true, 00:15:42.464 "data_offset": 2048, 00:15:42.464 "data_size": 63488 00:15:42.464 } 00:15:42.464 ] 
00:15:42.464 }' 00:15:42.464 10:00:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:42.464 10:00:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.033 10:00:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:43.033 10:00:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:43.033 10:00:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:43.033 10:00:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:43.033 10:00:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:43.033 10:00:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:43.033 10:00:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:43.033 10:00:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:43.033 10:00:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.033 10:00:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.033 [2024-10-21 10:00:19.392197] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:43.033 10:00:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.033 10:00:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:43.033 "name": "raid_bdev1", 00:15:43.033 "aliases": [ 00:15:43.033 "59e63cbe-a70b-4797-9b23-f302c41a4b47" 00:15:43.033 ], 00:15:43.033 "product_name": "Raid Volume", 00:15:43.033 "block_size": 512, 00:15:43.033 "num_blocks": 126976, 00:15:43.033 "uuid": "59e63cbe-a70b-4797-9b23-f302c41a4b47", 00:15:43.033 "assigned_rate_limits": { 00:15:43.033 
"rw_ios_per_sec": 0, 00:15:43.033 "rw_mbytes_per_sec": 0, 00:15:43.033 "r_mbytes_per_sec": 0, 00:15:43.033 "w_mbytes_per_sec": 0 00:15:43.033 }, 00:15:43.033 "claimed": false, 00:15:43.033 "zoned": false, 00:15:43.033 "supported_io_types": { 00:15:43.033 "read": true, 00:15:43.033 "write": true, 00:15:43.033 "unmap": false, 00:15:43.033 "flush": false, 00:15:43.033 "reset": true, 00:15:43.033 "nvme_admin": false, 00:15:43.033 "nvme_io": false, 00:15:43.033 "nvme_io_md": false, 00:15:43.033 "write_zeroes": true, 00:15:43.033 "zcopy": false, 00:15:43.033 "get_zone_info": false, 00:15:43.033 "zone_management": false, 00:15:43.033 "zone_append": false, 00:15:43.033 "compare": false, 00:15:43.033 "compare_and_write": false, 00:15:43.033 "abort": false, 00:15:43.033 "seek_hole": false, 00:15:43.033 "seek_data": false, 00:15:43.033 "copy": false, 00:15:43.033 "nvme_iov_md": false 00:15:43.033 }, 00:15:43.033 "driver_specific": { 00:15:43.033 "raid": { 00:15:43.033 "uuid": "59e63cbe-a70b-4797-9b23-f302c41a4b47", 00:15:43.033 "strip_size_kb": 64, 00:15:43.033 "state": "online", 00:15:43.033 "raid_level": "raid5f", 00:15:43.033 "superblock": true, 00:15:43.033 "num_base_bdevs": 3, 00:15:43.033 "num_base_bdevs_discovered": 3, 00:15:43.033 "num_base_bdevs_operational": 3, 00:15:43.033 "base_bdevs_list": [ 00:15:43.033 { 00:15:43.033 "name": "pt1", 00:15:43.033 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:43.033 "is_configured": true, 00:15:43.033 "data_offset": 2048, 00:15:43.033 "data_size": 63488 00:15:43.033 }, 00:15:43.033 { 00:15:43.033 "name": "pt2", 00:15:43.033 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:43.033 "is_configured": true, 00:15:43.033 "data_offset": 2048, 00:15:43.033 "data_size": 63488 00:15:43.033 }, 00:15:43.033 { 00:15:43.034 "name": "pt3", 00:15:43.034 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:43.034 "is_configured": true, 00:15:43.034 "data_offset": 2048, 00:15:43.034 "data_size": 63488 00:15:43.034 } 00:15:43.034 ] 
00:15:43.034 } 00:15:43.034 } 00:15:43.034 }' 00:15:43.034 10:00:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:43.034 10:00:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:43.034 pt2 00:15:43.034 pt3' 00:15:43.034 10:00:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:43.034 10:00:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:43.034 10:00:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:43.034 10:00:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:43.034 10:00:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:43.034 10:00:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.034 10:00:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.034 10:00:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.034 10:00:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:43.034 10:00:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:43.034 10:00:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:43.034 10:00:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:43.034 10:00:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.034 10:00:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:15:43.034 10:00:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.034 10:00:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.034 10:00:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:43.034 10:00:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:43.034 10:00:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:43.034 10:00:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:43.034 10:00:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:43.034 10:00:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.034 10:00:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.034 10:00:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.293 10:00:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:43.293 10:00:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:43.293 10:00:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:43.293 10:00:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:43.293 10:00:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.293 10:00:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.293 [2024-10-21 10:00:19.643641] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:43.293 10:00:19 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.293 10:00:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=59e63cbe-a70b-4797-9b23-f302c41a4b47 00:15:43.293 10:00:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 59e63cbe-a70b-4797-9b23-f302c41a4b47 ']' 00:15:43.293 10:00:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:43.293 10:00:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.293 10:00:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.293 [2024-10-21 10:00:19.671411] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:43.293 [2024-10-21 10:00:19.671443] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:43.293 [2024-10-21 10:00:19.671528] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:43.293 [2024-10-21 10:00:19.671625] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:43.293 [2024-10-21 10:00:19.671635] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000005b80 name raid_bdev1, state offline 00:15:43.293 10:00:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.293 10:00:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.293 10:00:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.293 10:00:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.293 10:00:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:43.293 10:00:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.293 10:00:19 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:43.293 10:00:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:43.293 10:00:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:43.293 10:00:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:43.293 10:00:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.293 10:00:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.293 10:00:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.293 10:00:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:43.293 10:00:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:43.293 10:00:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.293 10:00:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.293 10:00:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.293 10:00:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:43.293 10:00:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:15:43.293 10:00:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.293 10:00:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.293 10:00:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.293 10:00:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:43.293 10:00:19 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:43.293 10:00:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.293 10:00:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.293 10:00:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.293 10:00:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:15:43.293 10:00:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:43.293 10:00:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:15:43.293 10:00:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:43.293 10:00:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:43.293 10:00:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:43.293 10:00:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:43.293 10:00:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:43.293 10:00:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:43.293 10:00:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.293 10:00:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.293 [2024-10-21 10:00:19.803253] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:43.293 [2024-10-21 
10:00:19.805443] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:43.293 [2024-10-21 10:00:19.805504] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:43.293 [2024-10-21 10:00:19.805558] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:43.294 [2024-10-21 10:00:19.805619] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:43.294 [2024-10-21 10:00:19.805637] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:15:43.294 [2024-10-21 10:00:19.805655] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:43.294 [2024-10-21 10:00:19.805664] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000005f00 name raid_bdev1, state configuring 00:15:43.294 request: 00:15:43.294 { 00:15:43.294 "name": "raid_bdev1", 00:15:43.294 "raid_level": "raid5f", 00:15:43.294 "base_bdevs": [ 00:15:43.294 "malloc1", 00:15:43.294 "malloc2", 00:15:43.294 "malloc3" 00:15:43.294 ], 00:15:43.294 "strip_size_kb": 64, 00:15:43.294 "superblock": false, 00:15:43.294 "method": "bdev_raid_create", 00:15:43.294 "req_id": 1 00:15:43.294 } 00:15:43.294 Got JSON-RPC error response 00:15:43.294 response: 00:15:43.294 { 00:15:43.294 "code": -17, 00:15:43.294 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:43.294 } 00:15:43.294 10:00:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:43.294 10:00:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:15:43.294 10:00:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:43.294 10:00:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 
00:15:43.294 10:00:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:43.294 10:00:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.294 10:00:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.294 10:00:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.294 10:00:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:43.294 10:00:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.294 10:00:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:43.294 10:00:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:43.294 10:00:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:43.294 10:00:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.294 10:00:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.294 [2024-10-21 10:00:19.863085] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:43.294 [2024-10-21 10:00:19.863131] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:43.294 [2024-10-21 10:00:19.863155] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:15:43.294 [2024-10-21 10:00:19.863164] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:43.294 [2024-10-21 10:00:19.865792] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:43.294 [2024-10-21 10:00:19.865820] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:43.294 [2024-10-21 10:00:19.865903] 
bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:43.294 [2024-10-21 10:00:19.865962] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:43.294 pt1 00:15:43.294 10:00:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.294 10:00:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:15:43.294 10:00:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:43.294 10:00:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:43.294 10:00:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:43.294 10:00:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:43.294 10:00:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:43.294 10:00:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:43.294 10:00:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:43.294 10:00:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:43.294 10:00:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:43.294 10:00:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:43.294 10:00:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.294 10:00:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.294 10:00:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.553 10:00:19 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.553 10:00:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:43.553 "name": "raid_bdev1", 00:15:43.553 "uuid": "59e63cbe-a70b-4797-9b23-f302c41a4b47", 00:15:43.553 "strip_size_kb": 64, 00:15:43.553 "state": "configuring", 00:15:43.553 "raid_level": "raid5f", 00:15:43.553 "superblock": true, 00:15:43.553 "num_base_bdevs": 3, 00:15:43.553 "num_base_bdevs_discovered": 1, 00:15:43.553 "num_base_bdevs_operational": 3, 00:15:43.553 "base_bdevs_list": [ 00:15:43.553 { 00:15:43.553 "name": "pt1", 00:15:43.553 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:43.553 "is_configured": true, 00:15:43.553 "data_offset": 2048, 00:15:43.553 "data_size": 63488 00:15:43.553 }, 00:15:43.553 { 00:15:43.553 "name": null, 00:15:43.553 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:43.553 "is_configured": false, 00:15:43.553 "data_offset": 2048, 00:15:43.553 "data_size": 63488 00:15:43.553 }, 00:15:43.553 { 00:15:43.553 "name": null, 00:15:43.553 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:43.553 "is_configured": false, 00:15:43.553 "data_offset": 2048, 00:15:43.553 "data_size": 63488 00:15:43.553 } 00:15:43.553 ] 00:15:43.553 }' 00:15:43.553 10:00:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:43.553 10:00:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.814 10:00:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:15:43.814 10:00:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:43.814 10:00:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.814 10:00:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.814 [2024-10-21 10:00:20.306442] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:43.814 [2024-10-21 10:00:20.306540] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:43.814 [2024-10-21 10:00:20.306604] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:15:43.814 [2024-10-21 10:00:20.306618] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:43.814 [2024-10-21 10:00:20.307186] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:43.814 [2024-10-21 10:00:20.307204] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:43.814 [2024-10-21 10:00:20.307319] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:43.814 [2024-10-21 10:00:20.307346] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:43.814 pt2 00:15:43.814 10:00:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.814 10:00:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:15:43.814 10:00:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.814 10:00:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.814 [2024-10-21 10:00:20.318475] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:43.814 10:00:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.814 10:00:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:15:43.814 10:00:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:43.814 10:00:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:43.814 10:00:20 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:43.814 10:00:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:43.814 10:00:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:43.814 10:00:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:43.814 10:00:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:43.814 10:00:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:43.814 10:00:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:43.814 10:00:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.814 10:00:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:43.814 10:00:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.815 10:00:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.815 10:00:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.815 10:00:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:43.815 "name": "raid_bdev1", 00:15:43.815 "uuid": "59e63cbe-a70b-4797-9b23-f302c41a4b47", 00:15:43.815 "strip_size_kb": 64, 00:15:43.815 "state": "configuring", 00:15:43.815 "raid_level": "raid5f", 00:15:43.815 "superblock": true, 00:15:43.815 "num_base_bdevs": 3, 00:15:43.815 "num_base_bdevs_discovered": 1, 00:15:43.815 "num_base_bdevs_operational": 3, 00:15:43.815 "base_bdevs_list": [ 00:15:43.815 { 00:15:43.815 "name": "pt1", 00:15:43.815 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:43.815 "is_configured": true, 00:15:43.815 "data_offset": 2048, 00:15:43.815 "data_size": 63488 00:15:43.815 }, 00:15:43.815 { 
00:15:43.815 "name": null, 00:15:43.815 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:43.815 "is_configured": false, 00:15:43.815 "data_offset": 0, 00:15:43.815 "data_size": 63488 00:15:43.815 }, 00:15:43.815 { 00:15:43.815 "name": null, 00:15:43.815 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:43.815 "is_configured": false, 00:15:43.815 "data_offset": 2048, 00:15:43.815 "data_size": 63488 00:15:43.815 } 00:15:43.815 ] 00:15:43.815 }' 00:15:43.815 10:00:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:43.815 10:00:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.385 10:00:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:44.385 10:00:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:44.385 10:00:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:44.385 10:00:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.385 10:00:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.385 [2024-10-21 10:00:20.781576] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:44.385 [2024-10-21 10:00:20.781652] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:44.385 [2024-10-21 10:00:20.781674] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:44.385 [2024-10-21 10:00:20.781686] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:44.385 [2024-10-21 10:00:20.782237] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:44.385 [2024-10-21 10:00:20.782260] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:44.385 [2024-10-21 
10:00:20.782362] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:44.385 [2024-10-21 10:00:20.782393] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:44.385 pt2 00:15:44.385 10:00:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.385 10:00:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:44.385 10:00:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:44.385 10:00:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:44.385 10:00:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.385 10:00:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.386 [2024-10-21 10:00:20.793523] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:44.386 [2024-10-21 10:00:20.793590] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:44.386 [2024-10-21 10:00:20.793606] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:15:44.386 [2024-10-21 10:00:20.793617] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:44.386 [2024-10-21 10:00:20.794043] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:44.386 [2024-10-21 10:00:20.794073] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:44.386 [2024-10-21 10:00:20.794139] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:44.386 [2024-10-21 10:00:20.794162] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:44.386 [2024-10-21 10:00:20.794285] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000006280 00:15:44.386 [2024-10-21 10:00:20.794304] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:44.386 [2024-10-21 10:00:20.794592] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:15:44.386 [2024-10-21 10:00:20.800256] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:15:44.386 [2024-10-21 10:00:20.800282] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:15:44.386 [2024-10-21 10:00:20.800485] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:44.386 pt3 00:15:44.386 10:00:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.386 10:00:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:44.386 10:00:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:44.386 10:00:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:44.386 10:00:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:44.386 10:00:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:44.386 10:00:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:44.386 10:00:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:44.386 10:00:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:44.386 10:00:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:44.386 10:00:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:44.386 10:00:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:15:44.386 10:00:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:44.386 10:00:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.386 10:00:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:44.386 10:00:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.386 10:00:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.386 10:00:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.386 10:00:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:44.386 "name": "raid_bdev1", 00:15:44.386 "uuid": "59e63cbe-a70b-4797-9b23-f302c41a4b47", 00:15:44.386 "strip_size_kb": 64, 00:15:44.386 "state": "online", 00:15:44.386 "raid_level": "raid5f", 00:15:44.386 "superblock": true, 00:15:44.386 "num_base_bdevs": 3, 00:15:44.386 "num_base_bdevs_discovered": 3, 00:15:44.386 "num_base_bdevs_operational": 3, 00:15:44.386 "base_bdevs_list": [ 00:15:44.386 { 00:15:44.386 "name": "pt1", 00:15:44.386 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:44.386 "is_configured": true, 00:15:44.386 "data_offset": 2048, 00:15:44.386 "data_size": 63488 00:15:44.386 }, 00:15:44.386 { 00:15:44.386 "name": "pt2", 00:15:44.386 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:44.386 "is_configured": true, 00:15:44.386 "data_offset": 2048, 00:15:44.386 "data_size": 63488 00:15:44.386 }, 00:15:44.386 { 00:15:44.386 "name": "pt3", 00:15:44.386 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:44.386 "is_configured": true, 00:15:44.386 "data_offset": 2048, 00:15:44.386 "data_size": 63488 00:15:44.386 } 00:15:44.386 ] 00:15:44.386 }' 00:15:44.386 10:00:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:44.386 10:00:20 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.957 10:00:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:15:44.957 10:00:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:44.957 10:00:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:44.957 10:00:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:44.957 10:00:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:44.957 10:00:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:44.957 10:00:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:44.957 10:00:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.957 10:00:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.957 10:00:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:44.957 [2024-10-21 10:00:21.263349] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:44.957 10:00:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.957 10:00:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:44.957 "name": "raid_bdev1", 00:15:44.957 "aliases": [ 00:15:44.957 "59e63cbe-a70b-4797-9b23-f302c41a4b47" 00:15:44.957 ], 00:15:44.957 "product_name": "Raid Volume", 00:15:44.957 "block_size": 512, 00:15:44.957 "num_blocks": 126976, 00:15:44.957 "uuid": "59e63cbe-a70b-4797-9b23-f302c41a4b47", 00:15:44.957 "assigned_rate_limits": { 00:15:44.957 "rw_ios_per_sec": 0, 00:15:44.957 "rw_mbytes_per_sec": 0, 00:15:44.957 "r_mbytes_per_sec": 0, 00:15:44.957 "w_mbytes_per_sec": 0 00:15:44.957 }, 
00:15:44.957 "claimed": false, 00:15:44.957 "zoned": false, 00:15:44.957 "supported_io_types": { 00:15:44.957 "read": true, 00:15:44.957 "write": true, 00:15:44.957 "unmap": false, 00:15:44.957 "flush": false, 00:15:44.957 "reset": true, 00:15:44.957 "nvme_admin": false, 00:15:44.957 "nvme_io": false, 00:15:44.957 "nvme_io_md": false, 00:15:44.957 "write_zeroes": true, 00:15:44.957 "zcopy": false, 00:15:44.957 "get_zone_info": false, 00:15:44.957 "zone_management": false, 00:15:44.957 "zone_append": false, 00:15:44.957 "compare": false, 00:15:44.957 "compare_and_write": false, 00:15:44.957 "abort": false, 00:15:44.957 "seek_hole": false, 00:15:44.957 "seek_data": false, 00:15:44.957 "copy": false, 00:15:44.957 "nvme_iov_md": false 00:15:44.957 }, 00:15:44.957 "driver_specific": { 00:15:44.957 "raid": { 00:15:44.957 "uuid": "59e63cbe-a70b-4797-9b23-f302c41a4b47", 00:15:44.957 "strip_size_kb": 64, 00:15:44.957 "state": "online", 00:15:44.957 "raid_level": "raid5f", 00:15:44.957 "superblock": true, 00:15:44.957 "num_base_bdevs": 3, 00:15:44.957 "num_base_bdevs_discovered": 3, 00:15:44.957 "num_base_bdevs_operational": 3, 00:15:44.957 "base_bdevs_list": [ 00:15:44.957 { 00:15:44.957 "name": "pt1", 00:15:44.957 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:44.957 "is_configured": true, 00:15:44.957 "data_offset": 2048, 00:15:44.957 "data_size": 63488 00:15:44.957 }, 00:15:44.957 { 00:15:44.957 "name": "pt2", 00:15:44.957 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:44.957 "is_configured": true, 00:15:44.957 "data_offset": 2048, 00:15:44.957 "data_size": 63488 00:15:44.957 }, 00:15:44.957 { 00:15:44.957 "name": "pt3", 00:15:44.957 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:44.957 "is_configured": true, 00:15:44.957 "data_offset": 2048, 00:15:44.958 "data_size": 63488 00:15:44.958 } 00:15:44.958 ] 00:15:44.958 } 00:15:44.958 } 00:15:44.958 }' 00:15:44.958 10:00:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r 
'.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:44.958 10:00:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:44.958 pt2 00:15:44.958 pt3' 00:15:44.958 10:00:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:44.958 10:00:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:44.958 10:00:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:44.958 10:00:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:44.958 10:00:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.958 10:00:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:44.958 10:00:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.958 10:00:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.958 10:00:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:44.958 10:00:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:44.958 10:00:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:44.958 10:00:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:44.958 10:00:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:44.958 10:00:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.958 10:00:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set 
+x 00:15:44.958 10:00:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.958 10:00:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:44.958 10:00:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:44.958 10:00:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:44.958 10:00:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:44.958 10:00:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.958 10:00:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:44.958 10:00:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.958 10:00:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.958 10:00:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:44.958 10:00:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:44.958 10:00:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:44.958 10:00:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.958 10:00:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.958 10:00:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:44.958 [2024-10-21 10:00:21.522804] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:44.958 10:00:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.218 10:00:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 
59e63cbe-a70b-4797-9b23-f302c41a4b47 '!=' 59e63cbe-a70b-4797-9b23-f302c41a4b47 ']' 00:15:45.218 10:00:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:15:45.218 10:00:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:45.218 10:00:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:45.218 10:00:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:15:45.218 10:00:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.218 10:00:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.218 [2024-10-21 10:00:21.570604] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:45.218 10:00:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.218 10:00:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:45.218 10:00:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:45.218 10:00:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:45.218 10:00:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:45.218 10:00:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:45.218 10:00:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:45.218 10:00:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:45.218 10:00:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:45.218 10:00:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:45.218 10:00:21 bdev_raid.raid5f_superblock_test 
-- bdev/bdev_raid.sh@111 -- # local tmp 00:15:45.218 10:00:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.219 10:00:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:45.219 10:00:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.219 10:00:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.219 10:00:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.219 10:00:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:45.219 "name": "raid_bdev1", 00:15:45.219 "uuid": "59e63cbe-a70b-4797-9b23-f302c41a4b47", 00:15:45.219 "strip_size_kb": 64, 00:15:45.219 "state": "online", 00:15:45.219 "raid_level": "raid5f", 00:15:45.219 "superblock": true, 00:15:45.219 "num_base_bdevs": 3, 00:15:45.219 "num_base_bdevs_discovered": 2, 00:15:45.219 "num_base_bdevs_operational": 2, 00:15:45.219 "base_bdevs_list": [ 00:15:45.219 { 00:15:45.219 "name": null, 00:15:45.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.219 "is_configured": false, 00:15:45.219 "data_offset": 0, 00:15:45.219 "data_size": 63488 00:15:45.219 }, 00:15:45.219 { 00:15:45.219 "name": "pt2", 00:15:45.219 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:45.219 "is_configured": true, 00:15:45.219 "data_offset": 2048, 00:15:45.219 "data_size": 63488 00:15:45.219 }, 00:15:45.219 { 00:15:45.219 "name": "pt3", 00:15:45.219 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:45.219 "is_configured": true, 00:15:45.219 "data_offset": 2048, 00:15:45.219 "data_size": 63488 00:15:45.219 } 00:15:45.219 ] 00:15:45.219 }' 00:15:45.219 10:00:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:45.219 10:00:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.478 
10:00:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:45.478 10:00:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.478 10:00:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.478 [2024-10-21 10:00:22.029849] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:45.478 [2024-10-21 10:00:22.029884] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:45.478 [2024-10-21 10:00:22.030004] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:45.478 [2024-10-21 10:00:22.030080] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:45.478 [2024-10-21 10:00:22.030098] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:15:45.478 10:00:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.478 10:00:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.478 10:00:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.478 10:00:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.478 10:00:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:15:45.478 10:00:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.739 10:00:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:15:45.739 10:00:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:15:45.739 10:00:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:15:45.739 10:00:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # 
(( i < num_base_bdevs )) 00:15:45.739 10:00:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:15:45.739 10:00:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.739 10:00:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.739 10:00:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.739 10:00:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:45.739 10:00:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:45.739 10:00:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:15:45.739 10:00:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.739 10:00:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.739 10:00:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.739 10:00:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:45.739 10:00:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:45.739 10:00:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:15:45.739 10:00:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:45.739 10:00:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:45.739 10:00:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.739 10:00:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.739 [2024-10-21 10:00:22.113668] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
malloc2 00:15:45.739 [2024-10-21 10:00:22.113723] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:45.739 [2024-10-21 10:00:22.113743] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:45.739 [2024-10-21 10:00:22.113755] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:45.739 [2024-10-21 10:00:22.116319] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:45.739 [2024-10-21 10:00:22.116354] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:45.739 [2024-10-21 10:00:22.116439] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:45.739 [2024-10-21 10:00:22.116493] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:45.739 pt2 00:15:45.739 10:00:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.739 10:00:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:15:45.739 10:00:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:45.739 10:00:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:45.739 10:00:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:45.739 10:00:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:45.739 10:00:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:45.739 10:00:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:45.739 10:00:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:45.739 10:00:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:15:45.739 10:00:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:45.739 10:00:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.739 10:00:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:45.739 10:00:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.739 10:00:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.739 10:00:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.739 10:00:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:45.739 "name": "raid_bdev1", 00:15:45.739 "uuid": "59e63cbe-a70b-4797-9b23-f302c41a4b47", 00:15:45.739 "strip_size_kb": 64, 00:15:45.739 "state": "configuring", 00:15:45.739 "raid_level": "raid5f", 00:15:45.739 "superblock": true, 00:15:45.739 "num_base_bdevs": 3, 00:15:45.739 "num_base_bdevs_discovered": 1, 00:15:45.739 "num_base_bdevs_operational": 2, 00:15:45.739 "base_bdevs_list": [ 00:15:45.739 { 00:15:45.739 "name": null, 00:15:45.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.739 "is_configured": false, 00:15:45.739 "data_offset": 2048, 00:15:45.739 "data_size": 63488 00:15:45.739 }, 00:15:45.739 { 00:15:45.739 "name": "pt2", 00:15:45.739 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:45.739 "is_configured": true, 00:15:45.739 "data_offset": 2048, 00:15:45.739 "data_size": 63488 00:15:45.739 }, 00:15:45.739 { 00:15:45.739 "name": null, 00:15:45.739 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:45.739 "is_configured": false, 00:15:45.739 "data_offset": 2048, 00:15:45.739 "data_size": 63488 00:15:45.739 } 00:15:45.739 ] 00:15:45.739 }' 00:15:45.739 10:00:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:45.739 10:00:22 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.999 10:00:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:15:45.999 10:00:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:45.999 10:00:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:15:45.999 10:00:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:45.999 10:00:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.999 10:00:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.999 [2024-10-21 10:00:22.592882] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:45.999 [2024-10-21 10:00:22.592954] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:45.999 [2024-10-21 10:00:22.592984] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:45.999 [2024-10-21 10:00:22.592997] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:45.999 [2024-10-21 10:00:22.593549] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:45.999 [2024-10-21 10:00:22.593570] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:45.999 [2024-10-21 10:00:22.593689] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:45.999 [2024-10-21 10:00:22.593732] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:45.999 [2024-10-21 10:00:22.593865] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:15:45.999 [2024-10-21 10:00:22.593880] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:45.999 [2024-10-21 
10:00:22.594146] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:15:46.259 [2024-10-21 10:00:22.599424] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:15:46.259 [2024-10-21 10:00:22.599451] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:15:46.259 [2024-10-21 10:00:22.599830] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:46.259 pt3 00:15:46.259 10:00:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.259 10:00:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:46.259 10:00:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:46.259 10:00:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:46.259 10:00:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:46.259 10:00:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:46.259 10:00:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:46.259 10:00:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:46.259 10:00:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:46.259 10:00:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:46.259 10:00:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:46.259 10:00:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.259 10:00:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.259 10:00:22 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.259 10:00:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:46.259 10:00:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.259 10:00:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:46.260 "name": "raid_bdev1", 00:15:46.260 "uuid": "59e63cbe-a70b-4797-9b23-f302c41a4b47", 00:15:46.260 "strip_size_kb": 64, 00:15:46.260 "state": "online", 00:15:46.260 "raid_level": "raid5f", 00:15:46.260 "superblock": true, 00:15:46.260 "num_base_bdevs": 3, 00:15:46.260 "num_base_bdevs_discovered": 2, 00:15:46.260 "num_base_bdevs_operational": 2, 00:15:46.260 "base_bdevs_list": [ 00:15:46.260 { 00:15:46.260 "name": null, 00:15:46.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.260 "is_configured": false, 00:15:46.260 "data_offset": 2048, 00:15:46.260 "data_size": 63488 00:15:46.260 }, 00:15:46.260 { 00:15:46.260 "name": "pt2", 00:15:46.260 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:46.260 "is_configured": true, 00:15:46.260 "data_offset": 2048, 00:15:46.260 "data_size": 63488 00:15:46.260 }, 00:15:46.260 { 00:15:46.260 "name": "pt3", 00:15:46.260 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:46.260 "is_configured": true, 00:15:46.260 "data_offset": 2048, 00:15:46.260 "data_size": 63488 00:15:46.260 } 00:15:46.260 ] 00:15:46.260 }' 00:15:46.260 10:00:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:46.260 10:00:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.520 10:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:46.520 10:00:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.520 10:00:23 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:46.520 [2024-10-21 10:00:23.042472] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:46.521 [2024-10-21 10:00:23.042505] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:46.521 [2024-10-21 10:00:23.042597] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:46.521 [2024-10-21 10:00:23.042667] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:46.521 [2024-10-21 10:00:23.042677] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:15:46.521 10:00:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.521 10:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.521 10:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:15:46.521 10:00:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.521 10:00:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.521 10:00:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.521 10:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:15:46.521 10:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:15:46.521 10:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:15:46.521 10:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:15:46.521 10:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:15:46.521 10:00:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.521 10:00:23 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.521 10:00:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.521 10:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:46.521 10:00:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.521 10:00:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.521 [2024-10-21 10:00:23.110383] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:46.521 [2024-10-21 10:00:23.110435] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:46.521 [2024-10-21 10:00:23.110457] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:15:46.521 [2024-10-21 10:00:23.110468] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:46.521 [2024-10-21 10:00:23.113262] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:46.521 [2024-10-21 10:00:23.113292] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:46.521 [2024-10-21 10:00:23.113369] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:46.521 [2024-10-21 10:00:23.113433] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:46.521 [2024-10-21 10:00:23.113580] bdev_raid.c:3679:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:15:46.521 [2024-10-21 10:00:23.113613] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:46.521 [2024-10-21 10:00:23.113632] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state configuring 00:15:46.521 
[2024-10-21 10:00:23.113712] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:46.521 pt1 00:15:46.521 10:00:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.521 10:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:15:46.782 10:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:15:46.782 10:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:46.782 10:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:46.782 10:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:46.782 10:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:46.782 10:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:46.782 10:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:46.782 10:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:46.782 10:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:46.782 10:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:46.782 10:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.782 10:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:46.782 10:00:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.782 10:00:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.782 10:00:23 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.782 10:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:46.782 "name": "raid_bdev1", 00:15:46.782 "uuid": "59e63cbe-a70b-4797-9b23-f302c41a4b47", 00:15:46.782 "strip_size_kb": 64, 00:15:46.782 "state": "configuring", 00:15:46.782 "raid_level": "raid5f", 00:15:46.782 "superblock": true, 00:15:46.782 "num_base_bdevs": 3, 00:15:46.782 "num_base_bdevs_discovered": 1, 00:15:46.782 "num_base_bdevs_operational": 2, 00:15:46.782 "base_bdevs_list": [ 00:15:46.782 { 00:15:46.782 "name": null, 00:15:46.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.782 "is_configured": false, 00:15:46.782 "data_offset": 2048, 00:15:46.782 "data_size": 63488 00:15:46.782 }, 00:15:46.782 { 00:15:46.782 "name": "pt2", 00:15:46.782 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:46.782 "is_configured": true, 00:15:46.782 "data_offset": 2048, 00:15:46.782 "data_size": 63488 00:15:46.782 }, 00:15:46.782 { 00:15:46.782 "name": null, 00:15:46.782 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:46.782 "is_configured": false, 00:15:46.782 "data_offset": 2048, 00:15:46.782 "data_size": 63488 00:15:46.782 } 00:15:46.782 ] 00:15:46.782 }' 00:15:46.782 10:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:46.782 10:00:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.042 10:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:15:47.042 10:00:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.042 10:00:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.042 10:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:47.042 10:00:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:15:47.042 10:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:15:47.042 10:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:47.042 10:00:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.042 10:00:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.043 [2024-10-21 10:00:23.605551] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:47.043 [2024-10-21 10:00:23.605624] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:47.043 [2024-10-21 10:00:23.605652] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:15:47.043 [2024-10-21 10:00:23.605662] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:47.043 [2024-10-21 10:00:23.606239] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:47.043 [2024-10-21 10:00:23.606263] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:47.043 [2024-10-21 10:00:23.606369] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:47.043 [2024-10-21 10:00:23.606397] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:47.043 [2024-10-21 10:00:23.606548] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:15:47.043 [2024-10-21 10:00:23.606558] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:47.043 [2024-10-21 10:00:23.606879] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:47.043 [2024-10-21 10:00:23.612708] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:15:47.043 [2024-10-21 
10:00:23.612737] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:15:47.043 [2024-10-21 10:00:23.612981] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:47.043 pt3 00:15:47.043 10:00:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.043 10:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:47.043 10:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:47.043 10:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:47.043 10:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:47.043 10:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:47.043 10:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:47.043 10:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:47.043 10:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:47.043 10:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:47.043 10:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:47.043 10:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.043 10:00:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.043 10:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:47.043 10:00:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.302 10:00:23 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.302 10:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:47.302 "name": "raid_bdev1", 00:15:47.302 "uuid": "59e63cbe-a70b-4797-9b23-f302c41a4b47", 00:15:47.302 "strip_size_kb": 64, 00:15:47.302 "state": "online", 00:15:47.302 "raid_level": "raid5f", 00:15:47.302 "superblock": true, 00:15:47.302 "num_base_bdevs": 3, 00:15:47.302 "num_base_bdevs_discovered": 2, 00:15:47.302 "num_base_bdevs_operational": 2, 00:15:47.302 "base_bdevs_list": [ 00:15:47.302 { 00:15:47.302 "name": null, 00:15:47.302 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:47.302 "is_configured": false, 00:15:47.302 "data_offset": 2048, 00:15:47.302 "data_size": 63488 00:15:47.302 }, 00:15:47.302 { 00:15:47.302 "name": "pt2", 00:15:47.302 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:47.302 "is_configured": true, 00:15:47.302 "data_offset": 2048, 00:15:47.302 "data_size": 63488 00:15:47.302 }, 00:15:47.302 { 00:15:47.302 "name": "pt3", 00:15:47.302 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:47.302 "is_configured": true, 00:15:47.302 "data_offset": 2048, 00:15:47.302 "data_size": 63488 00:15:47.302 } 00:15:47.302 ] 00:15:47.302 }' 00:15:47.302 10:00:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:47.302 10:00:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.562 10:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:15:47.562 10:00:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.562 10:00:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.562 10:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:47.562 10:00:24 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.562 10:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:15:47.562 10:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:47.562 10:00:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.562 10:00:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.822 10:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:15:47.822 [2024-10-21 10:00:24.164131] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:47.822 10:00:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.822 10:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 59e63cbe-a70b-4797-9b23-f302c41a4b47 '!=' 59e63cbe-a70b-4797-9b23-f302c41a4b47 ']' 00:15:47.822 10:00:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 80781 00:15:47.822 10:00:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 80781 ']' 00:15:47.822 10:00:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # kill -0 80781 00:15:47.822 10:00:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # uname 00:15:47.822 10:00:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:47.822 10:00:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80781 00:15:47.822 10:00:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:47.822 10:00:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:47.822 killing process with pid 80781 00:15:47.822 10:00:24 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 80781' 00:15:47.822 10:00:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@969 -- # kill 80781 00:15:47.822 [2024-10-21 10:00:24.236866] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:47.822 [2024-10-21 10:00:24.236977] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:47.822 10:00:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@974 -- # wait 80781 00:15:47.822 [2024-10-21 10:00:24.237063] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:47.822 [2024-10-21 10:00:24.237077] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:15:48.081 [2024-10-21 10:00:24.567498] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:49.463 10:00:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:15:49.463 00:15:49.463 real 0m8.006s 00:15:49.463 user 0m12.357s 00:15:49.463 sys 0m1.540s 00:15:49.463 10:00:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:49.463 10:00:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.463 ************************************ 00:15:49.463 END TEST raid5f_superblock_test 00:15:49.463 ************************************ 00:15:49.463 10:00:25 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:15:49.463 10:00:25 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:15:49.463 10:00:25 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:15:49.463 10:00:25 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:49.463 10:00:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:49.463 ************************************ 00:15:49.463 START TEST 
raid5f_rebuild_test 00:15:49.463 ************************************ 00:15:49.463 10:00:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 3 false false true 00:15:49.463 10:00:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:15:49.463 10:00:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:15:49.463 10:00:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:15:49.463 10:00:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:49.463 10:00:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:49.463 10:00:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:49.463 10:00:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:49.463 10:00:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:49.463 10:00:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:49.463 10:00:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:49.463 10:00:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:49.463 10:00:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:49.463 10:00:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:49.463 10:00:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:49.463 10:00:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:49.463 10:00:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:49.463 10:00:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:49.463 10:00:25 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:49.463 10:00:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:49.463 10:00:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:49.463 10:00:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:49.463 10:00:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:49.463 10:00:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:49.463 10:00:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:15:49.463 10:00:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:15:49.463 10:00:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:15:49.463 10:00:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:15:49.463 10:00:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:15:49.463 10:00:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=81225 00:15:49.463 10:00:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 81225 00:15:49.463 10:00:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:49.463 10:00:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 81225 ']' 00:15:49.463 10:00:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:49.463 10:00:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:49.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:49.463 10:00:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:49.463 10:00:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:49.463 10:00:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.463 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:49.463 Zero copy mechanism will not be used. 00:15:49.463 [2024-10-21 10:00:25.968176] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:15:49.464 [2024-10-21 10:00:25.968314] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81225 ] 00:15:49.723 [2024-10-21 10:00:26.123623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:49.723 [2024-10-21 10:00:26.260283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:49.984 [2024-10-21 10:00:26.514472] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:49.984 [2024-10-21 10:00:26.514526] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:50.243 10:00:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:50.243 10:00:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:15:50.243 10:00:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:50.243 10:00:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:50.243 10:00:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.243 10:00:26 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:50.504 BaseBdev1_malloc 00:15:50.504 10:00:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.504 10:00:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:50.504 10:00:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.504 10:00:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.504 [2024-10-21 10:00:26.864615] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:50.504 [2024-10-21 10:00:26.864705] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:50.504 [2024-10-21 10:00:26.864733] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:15:50.504 [2024-10-21 10:00:26.864746] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:50.504 [2024-10-21 10:00:26.867228] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:50.504 [2024-10-21 10:00:26.867263] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:50.504 BaseBdev1 00:15:50.504 10:00:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.504 10:00:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:50.504 10:00:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:50.504 10:00:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.504 10:00:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.504 BaseBdev2_malloc 00:15:50.504 10:00:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.504 10:00:26 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:50.504 10:00:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.504 10:00:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.504 [2024-10-21 10:00:26.928382] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:50.504 [2024-10-21 10:00:26.928438] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:50.504 [2024-10-21 10:00:26.928459] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:15:50.504 [2024-10-21 10:00:26.928470] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:50.504 [2024-10-21 10:00:26.930937] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:50.504 [2024-10-21 10:00:26.930970] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:50.504 BaseBdev2 00:15:50.504 10:00:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.504 10:00:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:50.504 10:00:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:50.504 10:00:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.504 10:00:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.504 BaseBdev3_malloc 00:15:50.504 10:00:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.504 10:00:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:50.504 10:00:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:15:50.504 10:00:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.504 [2024-10-21 10:00:27.001614] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:50.504 [2024-10-21 10:00:27.001677] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:50.504 [2024-10-21 10:00:27.001700] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:15:50.504 [2024-10-21 10:00:27.001712] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:50.504 [2024-10-21 10:00:27.004222] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:50.504 [2024-10-21 10:00:27.004259] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:50.504 BaseBdev3 00:15:50.504 10:00:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.504 10:00:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:50.504 10:00:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.504 10:00:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.504 spare_malloc 00:15:50.504 10:00:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.504 10:00:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:50.504 10:00:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.504 10:00:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.504 spare_delay 00:15:50.504 10:00:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.504 10:00:27 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:50.504 10:00:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.504 10:00:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.504 [2024-10-21 10:00:27.077447] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:50.504 [2024-10-21 10:00:27.077510] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:50.504 [2024-10-21 10:00:27.077526] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:15:50.504 [2024-10-21 10:00:27.077537] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:50.504 [2024-10-21 10:00:27.079940] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:50.504 [2024-10-21 10:00:27.079975] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:50.504 spare 00:15:50.504 10:00:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.504 10:00:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:15:50.504 10:00:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.504 10:00:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.504 [2024-10-21 10:00:27.089501] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:50.504 [2024-10-21 10:00:27.091598] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:50.504 [2024-10-21 10:00:27.091661] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:50.504 [2024-10-21 10:00:27.091745] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000005b80 00:15:50.504 [2024-10-21 10:00:27.091755] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:50.504 [2024-10-21 10:00:27.092026] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:15:50.504 [2024-10-21 10:00:27.097979] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000005b80 00:15:50.504 [2024-10-21 10:00:27.098005] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000005b80 00:15:50.504 [2024-10-21 10:00:27.098202] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:50.765 10:00:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.765 10:00:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:50.765 10:00:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:50.765 10:00:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:50.765 10:00:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:50.765 10:00:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:50.765 10:00:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:50.765 10:00:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:50.765 10:00:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:50.765 10:00:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:50.766 10:00:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:50.766 10:00:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.766 
10:00:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.766 10:00:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:50.766 10:00:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.766 10:00:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.766 10:00:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:50.766 "name": "raid_bdev1", 00:15:50.766 "uuid": "f7544f5f-003d-498f-9d49-53a42c84a3be", 00:15:50.766 "strip_size_kb": 64, 00:15:50.766 "state": "online", 00:15:50.766 "raid_level": "raid5f", 00:15:50.766 "superblock": false, 00:15:50.766 "num_base_bdevs": 3, 00:15:50.766 "num_base_bdevs_discovered": 3, 00:15:50.766 "num_base_bdevs_operational": 3, 00:15:50.766 "base_bdevs_list": [ 00:15:50.766 { 00:15:50.766 "name": "BaseBdev1", 00:15:50.766 "uuid": "5baf2994-fea0-5729-965a-bb1d845c6b43", 00:15:50.766 "is_configured": true, 00:15:50.766 "data_offset": 0, 00:15:50.766 "data_size": 65536 00:15:50.766 }, 00:15:50.766 { 00:15:50.766 "name": "BaseBdev2", 00:15:50.766 "uuid": "a0b45f41-5a49-5051-ab69-565baddb178a", 00:15:50.766 "is_configured": true, 00:15:50.766 "data_offset": 0, 00:15:50.766 "data_size": 65536 00:15:50.766 }, 00:15:50.766 { 00:15:50.766 "name": "BaseBdev3", 00:15:50.766 "uuid": "344e7e05-ead2-539e-abb2-a0b75be811b2", 00:15:50.766 "is_configured": true, 00:15:50.766 "data_offset": 0, 00:15:50.766 "data_size": 65536 00:15:50.766 } 00:15:50.766 ] 00:15:50.766 }' 00:15:50.766 10:00:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:50.766 10:00:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.025 10:00:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:51.025 10:00:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # 
rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:51.025 10:00:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.025 10:00:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.025 [2024-10-21 10:00:27.600501] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:51.025 10:00:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.285 10:00:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:15:51.285 10:00:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.285 10:00:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.285 10:00:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:51.285 10:00:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.285 10:00:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.285 10:00:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:15:51.285 10:00:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:51.285 10:00:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:51.285 10:00:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:51.285 10:00:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:51.285 10:00:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:51.285 10:00:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:51.285 10:00:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:51.285 10:00:27 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:51.285 10:00:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:51.285 10:00:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:51.285 10:00:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:51.285 10:00:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:51.285 10:00:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:51.285 [2024-10-21 10:00:27.875953] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:51.545 /dev/nbd0 00:15:51.545 10:00:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:51.545 10:00:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:51.545 10:00:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:51.545 10:00:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:15:51.545 10:00:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:51.545 10:00:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:51.545 10:00:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:51.545 10:00:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:15:51.545 10:00:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:51.545 10:00:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:51.545 10:00:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 
iflag=direct 00:15:51.545 1+0 records in 00:15:51.545 1+0 records out 00:15:51.545 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000380014 s, 10.8 MB/s 00:15:51.545 10:00:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:51.545 10:00:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:15:51.545 10:00:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:51.545 10:00:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:51.545 10:00:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:15:51.545 10:00:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:51.545 10:00:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:51.545 10:00:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:15:51.545 10:00:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:15:51.545 10:00:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:15:51.545 10:00:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:15:51.804 512+0 records in 00:15:51.804 512+0 records out 00:15:51.804 67108864 bytes (67 MB, 64 MiB) copied, 0.410315 s, 164 MB/s 00:15:51.804 10:00:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:51.804 10:00:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:51.804 10:00:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:51.804 10:00:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:51.804 10:00:28 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:51.804 10:00:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:51.804 10:00:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:52.063 10:00:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:52.063 [2024-10-21 10:00:28.590320] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:52.063 10:00:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:52.063 10:00:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:52.063 10:00:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:52.063 10:00:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:52.063 10:00:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:52.063 10:00:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:52.063 10:00:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:52.063 10:00:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:52.063 10:00:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.063 10:00:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.063 [2024-10-21 10:00:28.606722] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:52.063 10:00:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.064 10:00:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:52.064 10:00:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:15:52.064 10:00:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:52.064 10:00:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:52.064 10:00:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:52.064 10:00:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:52.064 10:00:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:52.064 10:00:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:52.064 10:00:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:52.064 10:00:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:52.064 10:00:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:52.064 10:00:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.064 10:00:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.064 10:00:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.064 10:00:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.323 10:00:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:52.323 "name": "raid_bdev1", 00:15:52.323 "uuid": "f7544f5f-003d-498f-9d49-53a42c84a3be", 00:15:52.323 "strip_size_kb": 64, 00:15:52.323 "state": "online", 00:15:52.323 "raid_level": "raid5f", 00:15:52.323 "superblock": false, 00:15:52.323 "num_base_bdevs": 3, 00:15:52.323 "num_base_bdevs_discovered": 2, 00:15:52.323 "num_base_bdevs_operational": 2, 00:15:52.323 "base_bdevs_list": [ 00:15:52.323 { 00:15:52.323 "name": null, 00:15:52.323 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:52.323 "is_configured": false, 00:15:52.323 "data_offset": 0, 00:15:52.323 "data_size": 65536 00:15:52.323 }, 00:15:52.323 { 00:15:52.323 "name": "BaseBdev2", 00:15:52.323 "uuid": "a0b45f41-5a49-5051-ab69-565baddb178a", 00:15:52.323 "is_configured": true, 00:15:52.323 "data_offset": 0, 00:15:52.323 "data_size": 65536 00:15:52.323 }, 00:15:52.323 { 00:15:52.323 "name": "BaseBdev3", 00:15:52.323 "uuid": "344e7e05-ead2-539e-abb2-a0b75be811b2", 00:15:52.323 "is_configured": true, 00:15:52.323 "data_offset": 0, 00:15:52.323 "data_size": 65536 00:15:52.323 } 00:15:52.323 ] 00:15:52.323 }' 00:15:52.323 10:00:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:52.323 10:00:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.582 10:00:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:52.582 10:00:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.582 10:00:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.582 [2024-10-21 10:00:29.105904] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:52.582 [2024-10-21 10:00:29.126247] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b410 00:15:52.582 10:00:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.582 10:00:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:52.582 [2024-10-21 10:00:29.134752] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:53.969 10:00:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:53.969 10:00:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:53.969 
10:00:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:53.969 10:00:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:53.969 10:00:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:53.969 10:00:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.969 10:00:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.969 10:00:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.969 10:00:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:53.969 10:00:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.969 10:00:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:53.969 "name": "raid_bdev1", 00:15:53.969 "uuid": "f7544f5f-003d-498f-9d49-53a42c84a3be", 00:15:53.969 "strip_size_kb": 64, 00:15:53.969 "state": "online", 00:15:53.969 "raid_level": "raid5f", 00:15:53.969 "superblock": false, 00:15:53.969 "num_base_bdevs": 3, 00:15:53.969 "num_base_bdevs_discovered": 3, 00:15:53.969 "num_base_bdevs_operational": 3, 00:15:53.969 "process": { 00:15:53.969 "type": "rebuild", 00:15:53.969 "target": "spare", 00:15:53.969 "progress": { 00:15:53.969 "blocks": 20480, 00:15:53.969 "percent": 15 00:15:53.969 } 00:15:53.969 }, 00:15:53.969 "base_bdevs_list": [ 00:15:53.969 { 00:15:53.969 "name": "spare", 00:15:53.969 "uuid": "fc9ba8d2-0712-54ca-af30-8a8d5e55a663", 00:15:53.969 "is_configured": true, 00:15:53.969 "data_offset": 0, 00:15:53.969 "data_size": 65536 00:15:53.969 }, 00:15:53.969 { 00:15:53.969 "name": "BaseBdev2", 00:15:53.969 "uuid": "a0b45f41-5a49-5051-ab69-565baddb178a", 00:15:53.969 "is_configured": true, 00:15:53.969 "data_offset": 0, 00:15:53.969 "data_size": 65536 00:15:53.969 }, 00:15:53.969 
{ 00:15:53.969 "name": "BaseBdev3", 00:15:53.969 "uuid": "344e7e05-ead2-539e-abb2-a0b75be811b2", 00:15:53.969 "is_configured": true, 00:15:53.969 "data_offset": 0, 00:15:53.969 "data_size": 65536 00:15:53.969 } 00:15:53.969 ] 00:15:53.969 }' 00:15:53.969 10:00:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:53.969 10:00:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:53.969 10:00:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:53.969 10:00:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:53.969 10:00:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:53.969 10:00:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.969 10:00:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.969 [2024-10-21 10:00:30.290358] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:53.969 [2024-10-21 10:00:30.347561] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:53.969 [2024-10-21 10:00:30.347629] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:53.969 [2024-10-21 10:00:30.347665] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:53.969 [2024-10-21 10:00:30.347674] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:53.969 10:00:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.969 10:00:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:53.969 10:00:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:15:53.969 10:00:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:53.969 10:00:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:53.969 10:00:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:53.969 10:00:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:53.969 10:00:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:53.969 10:00:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:53.969 10:00:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:53.969 10:00:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:53.969 10:00:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.969 10:00:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:53.969 10:00:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.969 10:00:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.969 10:00:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.969 10:00:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:53.969 "name": "raid_bdev1", 00:15:53.969 "uuid": "f7544f5f-003d-498f-9d49-53a42c84a3be", 00:15:53.969 "strip_size_kb": 64, 00:15:53.969 "state": "online", 00:15:53.969 "raid_level": "raid5f", 00:15:53.969 "superblock": false, 00:15:53.969 "num_base_bdevs": 3, 00:15:53.969 "num_base_bdevs_discovered": 2, 00:15:53.969 "num_base_bdevs_operational": 2, 00:15:53.969 "base_bdevs_list": [ 00:15:53.969 { 00:15:53.969 "name": null, 00:15:53.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:53.969 
"is_configured": false, 00:15:53.969 "data_offset": 0, 00:15:53.969 "data_size": 65536 00:15:53.969 }, 00:15:53.969 { 00:15:53.969 "name": "BaseBdev2", 00:15:53.969 "uuid": "a0b45f41-5a49-5051-ab69-565baddb178a", 00:15:53.969 "is_configured": true, 00:15:53.969 "data_offset": 0, 00:15:53.969 "data_size": 65536 00:15:53.969 }, 00:15:53.969 { 00:15:53.969 "name": "BaseBdev3", 00:15:53.969 "uuid": "344e7e05-ead2-539e-abb2-a0b75be811b2", 00:15:53.969 "is_configured": true, 00:15:53.969 "data_offset": 0, 00:15:53.969 "data_size": 65536 00:15:53.969 } 00:15:53.969 ] 00:15:53.969 }' 00:15:53.969 10:00:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:53.969 10:00:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.540 10:00:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:54.540 10:00:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:54.540 10:00:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:54.540 10:00:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:54.540 10:00:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:54.540 10:00:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.540 10:00:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:54.540 10:00:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.540 10:00:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.540 10:00:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.540 10:00:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:54.540 "name": 
"raid_bdev1", 00:15:54.540 "uuid": "f7544f5f-003d-498f-9d49-53a42c84a3be", 00:15:54.540 "strip_size_kb": 64, 00:15:54.540 "state": "online", 00:15:54.540 "raid_level": "raid5f", 00:15:54.540 "superblock": false, 00:15:54.540 "num_base_bdevs": 3, 00:15:54.540 "num_base_bdevs_discovered": 2, 00:15:54.540 "num_base_bdevs_operational": 2, 00:15:54.540 "base_bdevs_list": [ 00:15:54.540 { 00:15:54.540 "name": null, 00:15:54.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.540 "is_configured": false, 00:15:54.540 "data_offset": 0, 00:15:54.540 "data_size": 65536 00:15:54.540 }, 00:15:54.540 { 00:15:54.540 "name": "BaseBdev2", 00:15:54.540 "uuid": "a0b45f41-5a49-5051-ab69-565baddb178a", 00:15:54.540 "is_configured": true, 00:15:54.540 "data_offset": 0, 00:15:54.540 "data_size": 65536 00:15:54.540 }, 00:15:54.540 { 00:15:54.540 "name": "BaseBdev3", 00:15:54.540 "uuid": "344e7e05-ead2-539e-abb2-a0b75be811b2", 00:15:54.540 "is_configured": true, 00:15:54.540 "data_offset": 0, 00:15:54.540 "data_size": 65536 00:15:54.540 } 00:15:54.540 ] 00:15:54.540 }' 00:15:54.540 10:00:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:54.540 10:00:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:54.540 10:00:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:54.540 10:00:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:54.540 10:00:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:54.540 10:00:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.540 10:00:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.540 [2024-10-21 10:00:30.989113] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:54.540 [2024-10-21 
10:00:31.007661] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b4e0 00:15:54.540 10:00:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.540 10:00:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:54.540 [2024-10-21 10:00:31.016720] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:55.479 10:00:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:55.479 10:00:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:55.479 10:00:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:55.479 10:00:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:55.479 10:00:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:55.479 10:00:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.479 10:00:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:55.479 10:00:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.479 10:00:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.479 10:00:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.479 10:00:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:55.479 "name": "raid_bdev1", 00:15:55.479 "uuid": "f7544f5f-003d-498f-9d49-53a42c84a3be", 00:15:55.479 "strip_size_kb": 64, 00:15:55.479 "state": "online", 00:15:55.479 "raid_level": "raid5f", 00:15:55.479 "superblock": false, 00:15:55.479 "num_base_bdevs": 3, 00:15:55.479 "num_base_bdevs_discovered": 3, 00:15:55.479 "num_base_bdevs_operational": 3, 
00:15:55.479 "process": { 00:15:55.479 "type": "rebuild", 00:15:55.479 "target": "spare", 00:15:55.479 "progress": { 00:15:55.479 "blocks": 20480, 00:15:55.479 "percent": 15 00:15:55.479 } 00:15:55.479 }, 00:15:55.479 "base_bdevs_list": [ 00:15:55.479 { 00:15:55.479 "name": "spare", 00:15:55.479 "uuid": "fc9ba8d2-0712-54ca-af30-8a8d5e55a663", 00:15:55.479 "is_configured": true, 00:15:55.479 "data_offset": 0, 00:15:55.479 "data_size": 65536 00:15:55.479 }, 00:15:55.479 { 00:15:55.480 "name": "BaseBdev2", 00:15:55.480 "uuid": "a0b45f41-5a49-5051-ab69-565baddb178a", 00:15:55.480 "is_configured": true, 00:15:55.480 "data_offset": 0, 00:15:55.480 "data_size": 65536 00:15:55.480 }, 00:15:55.480 { 00:15:55.480 "name": "BaseBdev3", 00:15:55.480 "uuid": "344e7e05-ead2-539e-abb2-a0b75be811b2", 00:15:55.480 "is_configured": true, 00:15:55.480 "data_offset": 0, 00:15:55.480 "data_size": 65536 00:15:55.480 } 00:15:55.480 ] 00:15:55.480 }' 00:15:55.480 10:00:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:55.739 10:00:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:55.739 10:00:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:55.739 10:00:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:55.739 10:00:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:15:55.739 10:00:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:15:55.739 10:00:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:15:55.739 10:00:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=559 00:15:55.739 10:00:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:55.739 10:00:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 
-- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:55.739 10:00:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:55.739 10:00:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:55.739 10:00:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:55.739 10:00:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:55.740 10:00:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.740 10:00:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:55.740 10:00:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.740 10:00:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.740 10:00:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.740 10:00:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:55.740 "name": "raid_bdev1", 00:15:55.740 "uuid": "f7544f5f-003d-498f-9d49-53a42c84a3be", 00:15:55.740 "strip_size_kb": 64, 00:15:55.740 "state": "online", 00:15:55.740 "raid_level": "raid5f", 00:15:55.740 "superblock": false, 00:15:55.740 "num_base_bdevs": 3, 00:15:55.740 "num_base_bdevs_discovered": 3, 00:15:55.740 "num_base_bdevs_operational": 3, 00:15:55.740 "process": { 00:15:55.740 "type": "rebuild", 00:15:55.740 "target": "spare", 00:15:55.740 "progress": { 00:15:55.740 "blocks": 22528, 00:15:55.740 "percent": 17 00:15:55.740 } 00:15:55.740 }, 00:15:55.740 "base_bdevs_list": [ 00:15:55.740 { 00:15:55.740 "name": "spare", 00:15:55.740 "uuid": "fc9ba8d2-0712-54ca-af30-8a8d5e55a663", 00:15:55.740 "is_configured": true, 00:15:55.740 "data_offset": 0, 00:15:55.740 "data_size": 65536 00:15:55.740 }, 00:15:55.740 { 00:15:55.740 "name": "BaseBdev2", 
00:15:55.740 "uuid": "a0b45f41-5a49-5051-ab69-565baddb178a", 00:15:55.740 "is_configured": true, 00:15:55.740 "data_offset": 0, 00:15:55.740 "data_size": 65536 00:15:55.740 }, 00:15:55.740 { 00:15:55.740 "name": "BaseBdev3", 00:15:55.740 "uuid": "344e7e05-ead2-539e-abb2-a0b75be811b2", 00:15:55.740 "is_configured": true, 00:15:55.740 "data_offset": 0, 00:15:55.740 "data_size": 65536 00:15:55.740 } 00:15:55.740 ] 00:15:55.740 }' 00:15:55.740 10:00:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:55.740 10:00:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:55.740 10:00:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:55.740 10:00:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:55.740 10:00:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:57.122 10:00:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:57.122 10:00:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:57.122 10:00:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:57.122 10:00:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:57.122 10:00:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:57.122 10:00:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:57.122 10:00:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.122 10:00:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.122 10:00:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:57.122 
10:00:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.122 10:00:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.122 10:00:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:57.122 "name": "raid_bdev1", 00:15:57.122 "uuid": "f7544f5f-003d-498f-9d49-53a42c84a3be", 00:15:57.122 "strip_size_kb": 64, 00:15:57.122 "state": "online", 00:15:57.122 "raid_level": "raid5f", 00:15:57.122 "superblock": false, 00:15:57.122 "num_base_bdevs": 3, 00:15:57.122 "num_base_bdevs_discovered": 3, 00:15:57.122 "num_base_bdevs_operational": 3, 00:15:57.122 "process": { 00:15:57.122 "type": "rebuild", 00:15:57.122 "target": "spare", 00:15:57.122 "progress": { 00:15:57.122 "blocks": 45056, 00:15:57.122 "percent": 34 00:15:57.122 } 00:15:57.122 }, 00:15:57.123 "base_bdevs_list": [ 00:15:57.123 { 00:15:57.123 "name": "spare", 00:15:57.123 "uuid": "fc9ba8d2-0712-54ca-af30-8a8d5e55a663", 00:15:57.123 "is_configured": true, 00:15:57.123 "data_offset": 0, 00:15:57.123 "data_size": 65536 00:15:57.123 }, 00:15:57.123 { 00:15:57.123 "name": "BaseBdev2", 00:15:57.123 "uuid": "a0b45f41-5a49-5051-ab69-565baddb178a", 00:15:57.123 "is_configured": true, 00:15:57.123 "data_offset": 0, 00:15:57.123 "data_size": 65536 00:15:57.123 }, 00:15:57.123 { 00:15:57.123 "name": "BaseBdev3", 00:15:57.123 "uuid": "344e7e05-ead2-539e-abb2-a0b75be811b2", 00:15:57.123 "is_configured": true, 00:15:57.123 "data_offset": 0, 00:15:57.123 "data_size": 65536 00:15:57.123 } 00:15:57.123 ] 00:15:57.123 }' 00:15:57.123 10:00:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:57.123 10:00:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:57.123 10:00:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:57.123 10:00:33 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:57.123 10:00:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:58.063 10:00:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:58.063 10:00:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:58.063 10:00:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:58.063 10:00:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:58.063 10:00:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:58.063 10:00:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:58.063 10:00:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.063 10:00:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:58.063 10:00:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.063 10:00:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.063 10:00:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.063 10:00:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:58.063 "name": "raid_bdev1", 00:15:58.063 "uuid": "f7544f5f-003d-498f-9d49-53a42c84a3be", 00:15:58.063 "strip_size_kb": 64, 00:15:58.063 "state": "online", 00:15:58.063 "raid_level": "raid5f", 00:15:58.063 "superblock": false, 00:15:58.063 "num_base_bdevs": 3, 00:15:58.063 "num_base_bdevs_discovered": 3, 00:15:58.063 "num_base_bdevs_operational": 3, 00:15:58.063 "process": { 00:15:58.063 "type": "rebuild", 00:15:58.063 "target": "spare", 00:15:58.063 "progress": { 00:15:58.063 "blocks": 67584, 00:15:58.063 "percent": 51 00:15:58.063 } 
00:15:58.063 }, 00:15:58.063 "base_bdevs_list": [ 00:15:58.063 { 00:15:58.063 "name": "spare", 00:15:58.063 "uuid": "fc9ba8d2-0712-54ca-af30-8a8d5e55a663", 00:15:58.063 "is_configured": true, 00:15:58.063 "data_offset": 0, 00:15:58.063 "data_size": 65536 00:15:58.063 }, 00:15:58.063 { 00:15:58.063 "name": "BaseBdev2", 00:15:58.063 "uuid": "a0b45f41-5a49-5051-ab69-565baddb178a", 00:15:58.063 "is_configured": true, 00:15:58.063 "data_offset": 0, 00:15:58.063 "data_size": 65536 00:15:58.063 }, 00:15:58.063 { 00:15:58.063 "name": "BaseBdev3", 00:15:58.063 "uuid": "344e7e05-ead2-539e-abb2-a0b75be811b2", 00:15:58.063 "is_configured": true, 00:15:58.063 "data_offset": 0, 00:15:58.063 "data_size": 65536 00:15:58.063 } 00:15:58.063 ] 00:15:58.063 }' 00:15:58.063 10:00:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:58.063 10:00:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:58.063 10:00:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:58.063 10:00:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:58.063 10:00:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:59.003 10:00:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:59.003 10:00:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:59.003 10:00:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:59.003 10:00:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:59.003 10:00:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:59.003 10:00:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:59.003 10:00:35 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.003 10:00:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.003 10:00:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.003 10:00:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.003 10:00:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.263 10:00:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:59.263 "name": "raid_bdev1", 00:15:59.263 "uuid": "f7544f5f-003d-498f-9d49-53a42c84a3be", 00:15:59.263 "strip_size_kb": 64, 00:15:59.263 "state": "online", 00:15:59.263 "raid_level": "raid5f", 00:15:59.263 "superblock": false, 00:15:59.263 "num_base_bdevs": 3, 00:15:59.263 "num_base_bdevs_discovered": 3, 00:15:59.263 "num_base_bdevs_operational": 3, 00:15:59.263 "process": { 00:15:59.263 "type": "rebuild", 00:15:59.263 "target": "spare", 00:15:59.263 "progress": { 00:15:59.263 "blocks": 92160, 00:15:59.263 "percent": 70 00:15:59.263 } 00:15:59.263 }, 00:15:59.263 "base_bdevs_list": [ 00:15:59.263 { 00:15:59.263 "name": "spare", 00:15:59.263 "uuid": "fc9ba8d2-0712-54ca-af30-8a8d5e55a663", 00:15:59.263 "is_configured": true, 00:15:59.263 "data_offset": 0, 00:15:59.263 "data_size": 65536 00:15:59.263 }, 00:15:59.263 { 00:15:59.263 "name": "BaseBdev2", 00:15:59.263 "uuid": "a0b45f41-5a49-5051-ab69-565baddb178a", 00:15:59.263 "is_configured": true, 00:15:59.263 "data_offset": 0, 00:15:59.263 "data_size": 65536 00:15:59.263 }, 00:15:59.263 { 00:15:59.263 "name": "BaseBdev3", 00:15:59.263 "uuid": "344e7e05-ead2-539e-abb2-a0b75be811b2", 00:15:59.263 "is_configured": true, 00:15:59.263 "data_offset": 0, 00:15:59.263 "data_size": 65536 00:15:59.263 } 00:15:59.263 ] 00:15:59.263 }' 00:15:59.263 10:00:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# jq -r '.process.type // "none"' 00:15:59.263 10:00:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:59.263 10:00:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:59.263 10:00:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:59.263 10:00:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:00.202 10:00:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:00.202 10:00:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:00.202 10:00:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:00.203 10:00:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:00.203 10:00:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:00.203 10:00:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:00.203 10:00:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.203 10:00:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:00.203 10:00:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.203 10:00:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.203 10:00:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.203 10:00:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:00.203 "name": "raid_bdev1", 00:16:00.203 "uuid": "f7544f5f-003d-498f-9d49-53a42c84a3be", 00:16:00.203 "strip_size_kb": 64, 00:16:00.203 "state": "online", 00:16:00.203 "raid_level": "raid5f", 00:16:00.203 "superblock": 
false, 00:16:00.203 "num_base_bdevs": 3, 00:16:00.203 "num_base_bdevs_discovered": 3, 00:16:00.203 "num_base_bdevs_operational": 3, 00:16:00.203 "process": { 00:16:00.203 "type": "rebuild", 00:16:00.203 "target": "spare", 00:16:00.203 "progress": { 00:16:00.203 "blocks": 114688, 00:16:00.203 "percent": 87 00:16:00.203 } 00:16:00.203 }, 00:16:00.203 "base_bdevs_list": [ 00:16:00.203 { 00:16:00.203 "name": "spare", 00:16:00.203 "uuid": "fc9ba8d2-0712-54ca-af30-8a8d5e55a663", 00:16:00.203 "is_configured": true, 00:16:00.203 "data_offset": 0, 00:16:00.203 "data_size": 65536 00:16:00.203 }, 00:16:00.203 { 00:16:00.203 "name": "BaseBdev2", 00:16:00.203 "uuid": "a0b45f41-5a49-5051-ab69-565baddb178a", 00:16:00.203 "is_configured": true, 00:16:00.203 "data_offset": 0, 00:16:00.203 "data_size": 65536 00:16:00.203 }, 00:16:00.203 { 00:16:00.203 "name": "BaseBdev3", 00:16:00.203 "uuid": "344e7e05-ead2-539e-abb2-a0b75be811b2", 00:16:00.203 "is_configured": true, 00:16:00.203 "data_offset": 0, 00:16:00.203 "data_size": 65536 00:16:00.203 } 00:16:00.203 ] 00:16:00.203 }' 00:16:00.203 10:00:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:00.462 10:00:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:00.462 10:00:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:00.462 10:00:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:00.462 10:00:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:01.032 [2024-10-21 10:00:37.477345] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:01.032 [2024-10-21 10:00:37.477573] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:01.032 [2024-10-21 10:00:37.477643] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:16:01.291 10:00:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:01.291 10:00:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:01.291 10:00:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:01.291 10:00:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:01.291 10:00:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:01.291 10:00:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:01.291 10:00:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.292 10:00:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.292 10:00:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:01.292 10:00:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.550 10:00:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.550 10:00:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:01.550 "name": "raid_bdev1", 00:16:01.550 "uuid": "f7544f5f-003d-498f-9d49-53a42c84a3be", 00:16:01.550 "strip_size_kb": 64, 00:16:01.550 "state": "online", 00:16:01.550 "raid_level": "raid5f", 00:16:01.550 "superblock": false, 00:16:01.550 "num_base_bdevs": 3, 00:16:01.550 "num_base_bdevs_discovered": 3, 00:16:01.550 "num_base_bdevs_operational": 3, 00:16:01.550 "base_bdevs_list": [ 00:16:01.550 { 00:16:01.550 "name": "spare", 00:16:01.550 "uuid": "fc9ba8d2-0712-54ca-af30-8a8d5e55a663", 00:16:01.550 "is_configured": true, 00:16:01.550 "data_offset": 0, 00:16:01.550 "data_size": 65536 00:16:01.550 }, 00:16:01.550 { 00:16:01.550 "name": "BaseBdev2", 00:16:01.550 "uuid": 
"a0b45f41-5a49-5051-ab69-565baddb178a", 00:16:01.550 "is_configured": true, 00:16:01.550 "data_offset": 0, 00:16:01.550 "data_size": 65536 00:16:01.550 }, 00:16:01.550 { 00:16:01.550 "name": "BaseBdev3", 00:16:01.550 "uuid": "344e7e05-ead2-539e-abb2-a0b75be811b2", 00:16:01.550 "is_configured": true, 00:16:01.550 "data_offset": 0, 00:16:01.550 "data_size": 65536 00:16:01.550 } 00:16:01.550 ] 00:16:01.550 }' 00:16:01.550 10:00:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:01.550 10:00:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:01.550 10:00:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:01.550 10:00:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:01.550 10:00:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:16:01.550 10:00:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:01.550 10:00:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:01.550 10:00:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:01.550 10:00:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:01.550 10:00:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:01.550 10:00:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.550 10:00:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:01.550 10:00:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.550 10:00:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.550 10:00:38 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.550 10:00:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:01.550 "name": "raid_bdev1", 00:16:01.550 "uuid": "f7544f5f-003d-498f-9d49-53a42c84a3be", 00:16:01.550 "strip_size_kb": 64, 00:16:01.550 "state": "online", 00:16:01.550 "raid_level": "raid5f", 00:16:01.550 "superblock": false, 00:16:01.550 "num_base_bdevs": 3, 00:16:01.550 "num_base_bdevs_discovered": 3, 00:16:01.550 "num_base_bdevs_operational": 3, 00:16:01.550 "base_bdevs_list": [ 00:16:01.550 { 00:16:01.550 "name": "spare", 00:16:01.550 "uuid": "fc9ba8d2-0712-54ca-af30-8a8d5e55a663", 00:16:01.550 "is_configured": true, 00:16:01.550 "data_offset": 0, 00:16:01.550 "data_size": 65536 00:16:01.550 }, 00:16:01.550 { 00:16:01.550 "name": "BaseBdev2", 00:16:01.550 "uuid": "a0b45f41-5a49-5051-ab69-565baddb178a", 00:16:01.550 "is_configured": true, 00:16:01.550 "data_offset": 0, 00:16:01.550 "data_size": 65536 00:16:01.550 }, 00:16:01.550 { 00:16:01.550 "name": "BaseBdev3", 00:16:01.550 "uuid": "344e7e05-ead2-539e-abb2-a0b75be811b2", 00:16:01.550 "is_configured": true, 00:16:01.550 "data_offset": 0, 00:16:01.550 "data_size": 65536 00:16:01.550 } 00:16:01.550 ] 00:16:01.550 }' 00:16:01.550 10:00:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:01.550 10:00:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:01.550 10:00:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:01.810 10:00:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:01.810 10:00:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:01.810 10:00:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:01.810 10:00:38 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:01.810 10:00:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:01.810 10:00:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:01.810 10:00:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:01.810 10:00:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:01.810 10:00:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:01.810 10:00:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:01.810 10:00:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:01.810 10:00:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.810 10:00:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:01.810 10:00:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.810 10:00:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.810 10:00:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.810 10:00:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:01.810 "name": "raid_bdev1", 00:16:01.810 "uuid": "f7544f5f-003d-498f-9d49-53a42c84a3be", 00:16:01.810 "strip_size_kb": 64, 00:16:01.810 "state": "online", 00:16:01.810 "raid_level": "raid5f", 00:16:01.810 "superblock": false, 00:16:01.810 "num_base_bdevs": 3, 00:16:01.810 "num_base_bdevs_discovered": 3, 00:16:01.810 "num_base_bdevs_operational": 3, 00:16:01.810 "base_bdevs_list": [ 00:16:01.810 { 00:16:01.810 "name": "spare", 00:16:01.810 "uuid": "fc9ba8d2-0712-54ca-af30-8a8d5e55a663", 00:16:01.810 "is_configured": true, 00:16:01.810 "data_offset": 
0, 00:16:01.810 "data_size": 65536 00:16:01.810 }, 00:16:01.810 { 00:16:01.810 "name": "BaseBdev2", 00:16:01.810 "uuid": "a0b45f41-5a49-5051-ab69-565baddb178a", 00:16:01.810 "is_configured": true, 00:16:01.810 "data_offset": 0, 00:16:01.810 "data_size": 65536 00:16:01.810 }, 00:16:01.810 { 00:16:01.810 "name": "BaseBdev3", 00:16:01.810 "uuid": "344e7e05-ead2-539e-abb2-a0b75be811b2", 00:16:01.810 "is_configured": true, 00:16:01.810 "data_offset": 0, 00:16:01.810 "data_size": 65536 00:16:01.810 } 00:16:01.810 ] 00:16:01.810 }' 00:16:01.810 10:00:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:01.810 10:00:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.077 10:00:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:02.077 10:00:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.077 10:00:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.077 [2024-10-21 10:00:38.641159] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:02.077 [2024-10-21 10:00:38.641203] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:02.077 [2024-10-21 10:00:38.641361] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:02.077 [2024-10-21 10:00:38.641475] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:02.077 [2024-10-21 10:00:38.641501] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000005b80 name raid_bdev1, state offline 00:16:02.077 10:00:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.077 10:00:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.077 10:00:38 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@720 -- # jq length 00:16:02.077 10:00:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.077 10:00:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.077 10:00:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.345 10:00:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:02.345 10:00:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:02.345 10:00:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:02.345 10:00:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:02.345 10:00:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:02.345 10:00:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:02.345 10:00:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:02.345 10:00:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:02.345 10:00:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:02.345 10:00:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:02.345 10:00:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:02.345 10:00:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:02.345 10:00:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:02.345 /dev/nbd0 00:16:02.345 10:00:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:02.345 10:00:38 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:02.345 10:00:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:02.345 10:00:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:16:02.345 10:00:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:02.345 10:00:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:02.345 10:00:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:02.345 10:00:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:16:02.345 10:00:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:02.345 10:00:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:02.345 10:00:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:02.345 1+0 records in 00:16:02.345 1+0 records out 00:16:02.345 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000329512 s, 12.4 MB/s 00:16:02.607 10:00:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:02.607 10:00:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:16:02.607 10:00:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:02.607 10:00:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:02.607 10:00:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:16:02.607 10:00:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:02.607 10:00:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:02.607 
10:00:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:02.607 /dev/nbd1 00:16:02.607 10:00:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:02.607 10:00:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:02.607 10:00:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:16:02.607 10:00:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:16:02.607 10:00:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:02.607 10:00:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:02.607 10:00:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:16:02.607 10:00:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:16:02.607 10:00:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:02.607 10:00:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:02.607 10:00:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:02.607 1+0 records in 00:16:02.607 1+0 records out 00:16:02.607 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00025146 s, 16.3 MB/s 00:16:02.607 10:00:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:02.607 10:00:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:16:02.607 10:00:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:02.867 10:00:39 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:02.867 10:00:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:16:02.867 10:00:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:02.867 10:00:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:02.867 10:00:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:16:02.867 10:00:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:02.867 10:00:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:02.867 10:00:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:02.867 10:00:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:02.867 10:00:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:02.867 10:00:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:02.867 10:00:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:03.127 10:00:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:03.127 10:00:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:03.127 10:00:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:03.127 10:00:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:03.127 10:00:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:03.127 10:00:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:03.127 10:00:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 
00:16:03.127 10:00:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:03.127 10:00:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:03.128 10:00:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:03.388 10:00:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:03.388 10:00:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:03.388 10:00:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:03.388 10:00:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:03.388 10:00:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:03.388 10:00:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:03.388 10:00:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:03.388 10:00:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:03.388 10:00:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:16:03.388 10:00:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 81225 00:16:03.388 10:00:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 81225 ']' 00:16:03.388 10:00:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 81225 00:16:03.388 10:00:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:16:03.388 10:00:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:03.388 10:00:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81225 00:16:03.388 killing process with pid 81225 00:16:03.388 Received shutdown signal, test time 
was about 60.000000 seconds 00:16:03.388 00:16:03.388 Latency(us) 00:16:03.388 [2024-10-21T10:00:39.983Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:03.388 [2024-10-21T10:00:39.983Z] =================================================================================================================== 00:16:03.388 [2024-10-21T10:00:39.983Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:03.388 10:00:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:03.388 10:00:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:03.388 10:00:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81225' 00:16:03.388 10:00:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@969 -- # kill 81225 00:16:03.388 [2024-10-21 10:00:39.890140] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:03.388 10:00:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@974 -- # wait 81225 00:16:03.958 [2024-10-21 10:00:40.317812] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:05.340 10:00:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:16:05.340 00:16:05.340 real 0m15.659s 00:16:05.340 user 0m19.055s 00:16:05.340 sys 0m2.294s 00:16:05.340 10:00:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:05.340 10:00:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.340 ************************************ 00:16:05.340 END TEST raid5f_rebuild_test 00:16:05.340 ************************************ 00:16:05.340 10:00:41 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:16:05.340 10:00:41 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:16:05.340 10:00:41 bdev_raid -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:16:05.340 10:00:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:05.340 ************************************ 00:16:05.340 START TEST raid5f_rebuild_test_sb 00:16:05.340 ************************************ 00:16:05.340 10:00:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 3 true false true 00:16:05.340 10:00:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:16:05.340 10:00:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:16:05.340 10:00:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:05.340 10:00:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:05.340 10:00:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:05.340 10:00:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:05.340 10:00:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:05.340 10:00:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:05.340 10:00:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:05.340 10:00:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:05.340 10:00:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:05.340 10:00:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:05.340 10:00:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:05.340 10:00:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:05.340 10:00:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:05.340 10:00:41 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:05.340 10:00:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:05.340 10:00:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:05.340 10:00:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:05.340 10:00:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:05.340 10:00:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:05.340 10:00:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:05.340 10:00:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:05.340 10:00:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:16:05.340 10:00:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:16:05.340 10:00:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:16:05.340 10:00:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:16:05.340 10:00:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:05.340 10:00:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:05.340 10:00:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=81667 00:16:05.340 10:00:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 81667 00:16:05.340 10:00:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:05.340 10:00:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 81667 
']' 00:16:05.340 10:00:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:05.340 10:00:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:05.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:05.340 10:00:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:05.340 10:00:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:05.340 10:00:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.340 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:05.340 Zero copy mechanism will not be used. 00:16:05.340 [2024-10-21 10:00:41.689855] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:16:05.340 [2024-10-21 10:00:41.689964] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81667 ] 00:16:05.340 [2024-10-21 10:00:41.852805] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:05.600 [2024-10-21 10:00:41.997896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:05.860 [2024-10-21 10:00:42.254884] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:05.860 [2024-10-21 10:00:42.254979] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:06.121 10:00:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:06.121 10:00:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:16:06.121 10:00:42 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:06.121 10:00:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:06.121 10:00:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.121 10:00:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.121 BaseBdev1_malloc 00:16:06.121 10:00:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.121 10:00:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:06.121 10:00:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.121 10:00:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.121 [2024-10-21 10:00:42.581334] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:06.121 [2024-10-21 10:00:42.581417] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:06.121 [2024-10-21 10:00:42.581444] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:16:06.121 [2024-10-21 10:00:42.581456] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:06.121 [2024-10-21 10:00:42.583910] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:06.121 [2024-10-21 10:00:42.583952] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:06.121 BaseBdev1 00:16:06.121 10:00:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.121 10:00:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:06.121 10:00:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # 
rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:06.121 10:00:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.121 10:00:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.121 BaseBdev2_malloc 00:16:06.121 10:00:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.121 10:00:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:06.121 10:00:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.121 10:00:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.121 [2024-10-21 10:00:42.645050] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:06.121 [2024-10-21 10:00:42.645129] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:06.121 [2024-10-21 10:00:42.645151] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:16:06.121 [2024-10-21 10:00:42.645163] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:06.121 [2024-10-21 10:00:42.647546] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:06.121 [2024-10-21 10:00:42.647597] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:06.121 BaseBdev2 00:16:06.121 10:00:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.121 10:00:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:06.121 10:00:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:06.121 10:00:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.121 
10:00:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.121 BaseBdev3_malloc 00:16:06.121 10:00:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.121 10:00:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:06.121 10:00:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.121 10:00:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.381 [2024-10-21 10:00:42.720285] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:06.381 [2024-10-21 10:00:42.720377] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:06.381 [2024-10-21 10:00:42.720408] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:16:06.381 [2024-10-21 10:00:42.720422] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:06.381 [2024-10-21 10:00:42.723049] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:06.381 [2024-10-21 10:00:42.723100] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:06.382 BaseBdev3 00:16:06.382 10:00:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.382 10:00:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:06.382 10:00:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.382 10:00:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.382 spare_malloc 00:16:06.382 10:00:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.382 10:00:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 
-- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:06.382 10:00:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.382 10:00:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.382 spare_delay 00:16:06.382 10:00:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.382 10:00:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:06.382 10:00:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.382 10:00:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.382 [2024-10-21 10:00:42.793989] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:06.382 [2024-10-21 10:00:42.794077] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:06.382 [2024-10-21 10:00:42.794104] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:16:06.382 [2024-10-21 10:00:42.794116] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:06.382 [2024-10-21 10:00:42.796808] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:06.382 [2024-10-21 10:00:42.796858] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:06.382 spare 00:16:06.382 10:00:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.382 10:00:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:16:06.382 10:00:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.382 10:00:42 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:06.382 [2024-10-21 10:00:42.806022] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:06.382 [2024-10-21 10:00:42.808176] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:06.382 [2024-10-21 10:00:42.808246] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:06.382 [2024-10-21 10:00:42.808450] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000005b80 00:16:06.382 [2024-10-21 10:00:42.808470] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:06.382 [2024-10-21 10:00:42.808774] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:16:06.382 [2024-10-21 10:00:42.814648] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000005b80 00:16:06.382 [2024-10-21 10:00:42.814675] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000005b80 00:16:06.382 [2024-10-21 10:00:42.814902] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:06.382 10:00:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.382 10:00:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:06.382 10:00:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:06.382 10:00:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:06.382 10:00:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:06.382 10:00:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:06.382 10:00:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:16:06.382 10:00:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:06.382 10:00:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:06.382 10:00:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:06.382 10:00:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:06.382 10:00:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.382 10:00:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.382 10:00:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.382 10:00:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.382 10:00:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.382 10:00:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:06.382 "name": "raid_bdev1", 00:16:06.382 "uuid": "7cec6f9b-453f-48fa-b9fc-4114253c1f96", 00:16:06.382 "strip_size_kb": 64, 00:16:06.382 "state": "online", 00:16:06.382 "raid_level": "raid5f", 00:16:06.382 "superblock": true, 00:16:06.382 "num_base_bdevs": 3, 00:16:06.382 "num_base_bdevs_discovered": 3, 00:16:06.382 "num_base_bdevs_operational": 3, 00:16:06.382 "base_bdevs_list": [ 00:16:06.382 { 00:16:06.382 "name": "BaseBdev1", 00:16:06.382 "uuid": "2056eb01-30a1-5a5e-b659-4ca3e7884fc2", 00:16:06.382 "is_configured": true, 00:16:06.382 "data_offset": 2048, 00:16:06.382 "data_size": 63488 00:16:06.382 }, 00:16:06.382 { 00:16:06.382 "name": "BaseBdev2", 00:16:06.382 "uuid": "1f49b0cd-fe9b-5764-a026-59bd17ae019e", 00:16:06.382 "is_configured": true, 00:16:06.382 "data_offset": 2048, 00:16:06.382 "data_size": 63488 00:16:06.382 }, 00:16:06.382 { 00:16:06.382 "name": 
"BaseBdev3", 00:16:06.382 "uuid": "2f13bdf7-cb03-57fb-8da4-1c5803129663", 00:16:06.382 "is_configured": true, 00:16:06.382 "data_offset": 2048, 00:16:06.382 "data_size": 63488 00:16:06.382 } 00:16:06.382 ] 00:16:06.382 }' 00:16:06.382 10:00:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:06.382 10:00:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.642 10:00:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:06.642 10:00:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.642 10:00:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.642 10:00:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:06.642 [2024-10-21 10:00:43.217541] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:06.642 10:00:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.903 10:00:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:16:06.903 10:00:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.903 10:00:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.903 10:00:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.903 10:00:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:06.903 10:00:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.903 10:00:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:16:06.903 10:00:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:06.903 10:00:43 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:06.903 10:00:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:06.903 10:00:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:06.903 10:00:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:06.903 10:00:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:06.903 10:00:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:06.903 10:00:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:06.903 10:00:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:06.903 10:00:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:06.903 10:00:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:06.903 10:00:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:06.903 10:00:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:06.903 [2024-10-21 10:00:43.492896] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:07.163 /dev/nbd0 00:16:07.163 10:00:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:07.163 10:00:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:07.163 10:00:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:07.163 10:00:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:16:07.163 10:00:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 
-- # (( i = 1 )) 00:16:07.163 10:00:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:07.163 10:00:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:07.163 10:00:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:16:07.163 10:00:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:07.163 10:00:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:07.163 10:00:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:07.163 1+0 records in 00:16:07.163 1+0 records out 00:16:07.163 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000375231 s, 10.9 MB/s 00:16:07.164 10:00:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:07.164 10:00:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:16:07.164 10:00:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:07.164 10:00:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:07.164 10:00:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:16:07.164 10:00:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:07.164 10:00:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:07.164 10:00:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:16:07.164 10:00:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:16:07.164 10:00:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 
00:16:07.164 10:00:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:16:07.424 496+0 records in 00:16:07.424 496+0 records out 00:16:07.424 65011712 bytes (65 MB, 62 MiB) copied, 0.343264 s, 189 MB/s 00:16:07.424 10:00:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:07.424 10:00:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:07.424 10:00:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:07.424 10:00:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:07.424 10:00:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:07.424 10:00:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:07.424 10:00:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:07.684 10:00:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:07.684 [2024-10-21 10:00:44.125881] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:07.684 10:00:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:07.684 10:00:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:07.684 10:00:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:07.684 10:00:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:07.684 10:00:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:07.684 10:00:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:07.684 10:00:44 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:07.684 10:00:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:07.684 10:00:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.684 10:00:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.684 [2024-10-21 10:00:44.146398] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:07.684 10:00:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.684 10:00:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:07.684 10:00:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:07.684 10:00:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:07.684 10:00:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:07.684 10:00:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:07.684 10:00:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:07.684 10:00:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:07.684 10:00:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:07.684 10:00:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:07.684 10:00:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:07.684 10:00:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.684 10:00:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.684 10:00:44 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.684 10:00:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:07.684 10:00:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.684 10:00:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:07.684 "name": "raid_bdev1", 00:16:07.684 "uuid": "7cec6f9b-453f-48fa-b9fc-4114253c1f96", 00:16:07.684 "strip_size_kb": 64, 00:16:07.684 "state": "online", 00:16:07.684 "raid_level": "raid5f", 00:16:07.684 "superblock": true, 00:16:07.684 "num_base_bdevs": 3, 00:16:07.684 "num_base_bdevs_discovered": 2, 00:16:07.684 "num_base_bdevs_operational": 2, 00:16:07.684 "base_bdevs_list": [ 00:16:07.684 { 00:16:07.684 "name": null, 00:16:07.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.684 "is_configured": false, 00:16:07.684 "data_offset": 0, 00:16:07.684 "data_size": 63488 00:16:07.684 }, 00:16:07.684 { 00:16:07.684 "name": "BaseBdev2", 00:16:07.684 "uuid": "1f49b0cd-fe9b-5764-a026-59bd17ae019e", 00:16:07.684 "is_configured": true, 00:16:07.684 "data_offset": 2048, 00:16:07.684 "data_size": 63488 00:16:07.684 }, 00:16:07.684 { 00:16:07.684 "name": "BaseBdev3", 00:16:07.684 "uuid": "2f13bdf7-cb03-57fb-8da4-1c5803129663", 00:16:07.684 "is_configured": true, 00:16:07.684 "data_offset": 2048, 00:16:07.684 "data_size": 63488 00:16:07.684 } 00:16:07.684 ] 00:16:07.684 }' 00:16:07.684 10:00:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:07.684 10:00:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.254 10:00:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:08.254 10:00:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.254 10:00:44 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.254 [2024-10-21 10:00:44.621720] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:08.254 [2024-10-21 10:00:44.641801] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028d10 00:16:08.254 10:00:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.254 10:00:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:08.254 [2024-10-21 10:00:44.650932] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:09.195 10:00:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:09.195 10:00:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:09.195 10:00:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:09.195 10:00:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:09.195 10:00:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:09.195 10:00:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.195 10:00:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:09.195 10:00:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.195 10:00:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.195 10:00:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.195 10:00:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:09.195 "name": "raid_bdev1", 00:16:09.195 "uuid": "7cec6f9b-453f-48fa-b9fc-4114253c1f96", 
00:16:09.195 "strip_size_kb": 64, 00:16:09.195 "state": "online", 00:16:09.195 "raid_level": "raid5f", 00:16:09.195 "superblock": true, 00:16:09.195 "num_base_bdevs": 3, 00:16:09.195 "num_base_bdevs_discovered": 3, 00:16:09.195 "num_base_bdevs_operational": 3, 00:16:09.195 "process": { 00:16:09.195 "type": "rebuild", 00:16:09.195 "target": "spare", 00:16:09.195 "progress": { 00:16:09.195 "blocks": 20480, 00:16:09.195 "percent": 16 00:16:09.195 } 00:16:09.195 }, 00:16:09.195 "base_bdevs_list": [ 00:16:09.195 { 00:16:09.195 "name": "spare", 00:16:09.195 "uuid": "bc160262-77ac-5c24-9a0c-b8a878581756", 00:16:09.195 "is_configured": true, 00:16:09.195 "data_offset": 2048, 00:16:09.195 "data_size": 63488 00:16:09.195 }, 00:16:09.195 { 00:16:09.195 "name": "BaseBdev2", 00:16:09.195 "uuid": "1f49b0cd-fe9b-5764-a026-59bd17ae019e", 00:16:09.195 "is_configured": true, 00:16:09.195 "data_offset": 2048, 00:16:09.195 "data_size": 63488 00:16:09.195 }, 00:16:09.195 { 00:16:09.195 "name": "BaseBdev3", 00:16:09.195 "uuid": "2f13bdf7-cb03-57fb-8da4-1c5803129663", 00:16:09.195 "is_configured": true, 00:16:09.195 "data_offset": 2048, 00:16:09.195 "data_size": 63488 00:16:09.195 } 00:16:09.195 ] 00:16:09.195 }' 00:16:09.195 10:00:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:09.195 10:00:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:09.195 10:00:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:09.456 10:00:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:09.456 10:00:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:09.456 10:00:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.456 10:00:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:16:09.456 [2024-10-21 10:00:45.810477] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:09.456 [2024-10-21 10:00:45.862249] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:09.456 [2024-10-21 10:00:45.862348] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:09.456 [2024-10-21 10:00:45.862378] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:09.456 [2024-10-21 10:00:45.862387] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:09.456 10:00:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.456 10:00:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:09.456 10:00:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:09.456 10:00:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:09.456 10:00:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:09.456 10:00:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:09.456 10:00:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:09.456 10:00:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:09.456 10:00:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:09.456 10:00:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:09.456 10:00:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:09.456 10:00:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.456 
10:00:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:09.456 10:00:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.456 10:00:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.456 10:00:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.456 10:00:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:09.456 "name": "raid_bdev1", 00:16:09.456 "uuid": "7cec6f9b-453f-48fa-b9fc-4114253c1f96", 00:16:09.456 "strip_size_kb": 64, 00:16:09.456 "state": "online", 00:16:09.456 "raid_level": "raid5f", 00:16:09.456 "superblock": true, 00:16:09.456 "num_base_bdevs": 3, 00:16:09.456 "num_base_bdevs_discovered": 2, 00:16:09.456 "num_base_bdevs_operational": 2, 00:16:09.456 "base_bdevs_list": [ 00:16:09.456 { 00:16:09.456 "name": null, 00:16:09.456 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.456 "is_configured": false, 00:16:09.456 "data_offset": 0, 00:16:09.456 "data_size": 63488 00:16:09.456 }, 00:16:09.456 { 00:16:09.456 "name": "BaseBdev2", 00:16:09.456 "uuid": "1f49b0cd-fe9b-5764-a026-59bd17ae019e", 00:16:09.456 "is_configured": true, 00:16:09.456 "data_offset": 2048, 00:16:09.456 "data_size": 63488 00:16:09.456 }, 00:16:09.456 { 00:16:09.456 "name": "BaseBdev3", 00:16:09.456 "uuid": "2f13bdf7-cb03-57fb-8da4-1c5803129663", 00:16:09.456 "is_configured": true, 00:16:09.456 "data_offset": 2048, 00:16:09.456 "data_size": 63488 00:16:09.456 } 00:16:09.456 ] 00:16:09.456 }' 00:16:09.456 10:00:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:09.456 10:00:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.026 10:00:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:10.026 10:00:46 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:10.026 10:00:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:10.026 10:00:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:10.026 10:00:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:10.026 10:00:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.026 10:00:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.026 10:00:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.026 10:00:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.026 10:00:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.026 10:00:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:10.026 "name": "raid_bdev1", 00:16:10.026 "uuid": "7cec6f9b-453f-48fa-b9fc-4114253c1f96", 00:16:10.026 "strip_size_kb": 64, 00:16:10.026 "state": "online", 00:16:10.026 "raid_level": "raid5f", 00:16:10.026 "superblock": true, 00:16:10.026 "num_base_bdevs": 3, 00:16:10.026 "num_base_bdevs_discovered": 2, 00:16:10.026 "num_base_bdevs_operational": 2, 00:16:10.026 "base_bdevs_list": [ 00:16:10.026 { 00:16:10.026 "name": null, 00:16:10.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.026 "is_configured": false, 00:16:10.027 "data_offset": 0, 00:16:10.027 "data_size": 63488 00:16:10.027 }, 00:16:10.027 { 00:16:10.027 "name": "BaseBdev2", 00:16:10.027 "uuid": "1f49b0cd-fe9b-5764-a026-59bd17ae019e", 00:16:10.027 "is_configured": true, 00:16:10.027 "data_offset": 2048, 00:16:10.027 "data_size": 63488 00:16:10.027 }, 00:16:10.027 { 00:16:10.027 "name": "BaseBdev3", 00:16:10.027 "uuid": 
"2f13bdf7-cb03-57fb-8da4-1c5803129663", 00:16:10.027 "is_configured": true, 00:16:10.027 "data_offset": 2048, 00:16:10.027 "data_size": 63488 00:16:10.027 } 00:16:10.027 ] 00:16:10.027 }' 00:16:10.027 10:00:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:10.027 10:00:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:10.027 10:00:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:10.027 10:00:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:10.027 10:00:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:10.027 10:00:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.027 10:00:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.027 [2024-10-21 10:00:46.495872] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:10.027 [2024-10-21 10:00:46.514711] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028de0 00:16:10.027 10:00:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.027 10:00:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:10.027 [2024-10-21 10:00:46.522767] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:10.989 10:00:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:10.989 10:00:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:10.989 10:00:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:10.989 10:00:47 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:16:10.989 10:00:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:10.989 10:00:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.989 10:00:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.989 10:00:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.989 10:00:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.989 10:00:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.989 10:00:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:10.989 "name": "raid_bdev1", 00:16:10.989 "uuid": "7cec6f9b-453f-48fa-b9fc-4114253c1f96", 00:16:10.989 "strip_size_kb": 64, 00:16:10.989 "state": "online", 00:16:10.989 "raid_level": "raid5f", 00:16:10.989 "superblock": true, 00:16:10.989 "num_base_bdevs": 3, 00:16:10.989 "num_base_bdevs_discovered": 3, 00:16:10.989 "num_base_bdevs_operational": 3, 00:16:10.989 "process": { 00:16:10.989 "type": "rebuild", 00:16:10.989 "target": "spare", 00:16:10.989 "progress": { 00:16:10.989 "blocks": 20480, 00:16:10.989 "percent": 16 00:16:10.989 } 00:16:10.989 }, 00:16:10.989 "base_bdevs_list": [ 00:16:10.989 { 00:16:10.989 "name": "spare", 00:16:10.989 "uuid": "bc160262-77ac-5c24-9a0c-b8a878581756", 00:16:10.989 "is_configured": true, 00:16:10.989 "data_offset": 2048, 00:16:10.989 "data_size": 63488 00:16:10.989 }, 00:16:10.989 { 00:16:10.989 "name": "BaseBdev2", 00:16:10.989 "uuid": "1f49b0cd-fe9b-5764-a026-59bd17ae019e", 00:16:10.989 "is_configured": true, 00:16:10.989 "data_offset": 2048, 00:16:10.989 "data_size": 63488 00:16:10.989 }, 00:16:10.989 { 00:16:10.989 "name": "BaseBdev3", 00:16:10.989 "uuid": "2f13bdf7-cb03-57fb-8da4-1c5803129663", 00:16:10.989 
"is_configured": true, 00:16:10.989 "data_offset": 2048, 00:16:10.989 "data_size": 63488 00:16:10.989 } 00:16:10.989 ] 00:16:10.989 }' 00:16:10.989 10:00:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:11.249 10:00:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:11.249 10:00:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:11.249 10:00:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:11.249 10:00:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:11.249 10:00:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:11.249 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:11.249 10:00:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:16:11.249 10:00:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:16:11.249 10:00:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=574 00:16:11.249 10:00:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:11.249 10:00:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:11.249 10:00:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:11.249 10:00:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:11.249 10:00:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:11.249 10:00:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:11.249 10:00:47 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.249 10:00:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.249 10:00:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.249 10:00:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.249 10:00:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.249 10:00:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:11.249 "name": "raid_bdev1", 00:16:11.250 "uuid": "7cec6f9b-453f-48fa-b9fc-4114253c1f96", 00:16:11.250 "strip_size_kb": 64, 00:16:11.250 "state": "online", 00:16:11.250 "raid_level": "raid5f", 00:16:11.250 "superblock": true, 00:16:11.250 "num_base_bdevs": 3, 00:16:11.250 "num_base_bdevs_discovered": 3, 00:16:11.250 "num_base_bdevs_operational": 3, 00:16:11.250 "process": { 00:16:11.250 "type": "rebuild", 00:16:11.250 "target": "spare", 00:16:11.250 "progress": { 00:16:11.250 "blocks": 22528, 00:16:11.250 "percent": 17 00:16:11.250 } 00:16:11.250 }, 00:16:11.250 "base_bdevs_list": [ 00:16:11.250 { 00:16:11.250 "name": "spare", 00:16:11.250 "uuid": "bc160262-77ac-5c24-9a0c-b8a878581756", 00:16:11.250 "is_configured": true, 00:16:11.250 "data_offset": 2048, 00:16:11.250 "data_size": 63488 00:16:11.250 }, 00:16:11.250 { 00:16:11.250 "name": "BaseBdev2", 00:16:11.250 "uuid": "1f49b0cd-fe9b-5764-a026-59bd17ae019e", 00:16:11.250 "is_configured": true, 00:16:11.250 "data_offset": 2048, 00:16:11.250 "data_size": 63488 00:16:11.250 }, 00:16:11.250 { 00:16:11.250 "name": "BaseBdev3", 00:16:11.250 "uuid": "2f13bdf7-cb03-57fb-8da4-1c5803129663", 00:16:11.250 "is_configured": true, 00:16:11.250 "data_offset": 2048, 00:16:11.250 "data_size": 63488 00:16:11.250 } 00:16:11.250 ] 00:16:11.250 }' 00:16:11.250 10:00:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq 
-r '.process.type // "none"' 00:16:11.250 10:00:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:11.250 10:00:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:11.250 10:00:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:11.250 10:00:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:12.630 10:00:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:12.630 10:00:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:12.630 10:00:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:12.630 10:00:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:12.630 10:00:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:12.630 10:00:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:12.630 10:00:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.630 10:00:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.630 10:00:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.630 10:00:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.630 10:00:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.630 10:00:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:12.630 "name": "raid_bdev1", 00:16:12.630 "uuid": "7cec6f9b-453f-48fa-b9fc-4114253c1f96", 00:16:12.630 "strip_size_kb": 64, 00:16:12.630 "state": "online", 00:16:12.630 
"raid_level": "raid5f", 00:16:12.630 "superblock": true, 00:16:12.630 "num_base_bdevs": 3, 00:16:12.630 "num_base_bdevs_discovered": 3, 00:16:12.630 "num_base_bdevs_operational": 3, 00:16:12.630 "process": { 00:16:12.630 "type": "rebuild", 00:16:12.630 "target": "spare", 00:16:12.630 "progress": { 00:16:12.630 "blocks": 45056, 00:16:12.630 "percent": 35 00:16:12.630 } 00:16:12.630 }, 00:16:12.630 "base_bdevs_list": [ 00:16:12.630 { 00:16:12.631 "name": "spare", 00:16:12.631 "uuid": "bc160262-77ac-5c24-9a0c-b8a878581756", 00:16:12.631 "is_configured": true, 00:16:12.631 "data_offset": 2048, 00:16:12.631 "data_size": 63488 00:16:12.631 }, 00:16:12.631 { 00:16:12.631 "name": "BaseBdev2", 00:16:12.631 "uuid": "1f49b0cd-fe9b-5764-a026-59bd17ae019e", 00:16:12.631 "is_configured": true, 00:16:12.631 "data_offset": 2048, 00:16:12.631 "data_size": 63488 00:16:12.631 }, 00:16:12.631 { 00:16:12.631 "name": "BaseBdev3", 00:16:12.631 "uuid": "2f13bdf7-cb03-57fb-8da4-1c5803129663", 00:16:12.631 "is_configured": true, 00:16:12.631 "data_offset": 2048, 00:16:12.631 "data_size": 63488 00:16:12.631 } 00:16:12.631 ] 00:16:12.631 }' 00:16:12.631 10:00:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:12.631 10:00:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:12.631 10:00:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:12.631 10:00:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:12.631 10:00:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:13.571 10:00:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:13.571 10:00:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:13.571 10:00:49 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:13.571 10:00:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:13.571 10:00:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:13.571 10:00:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:13.571 10:00:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.571 10:00:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:13.571 10:00:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.571 10:00:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.571 10:00:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.571 10:00:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:13.571 "name": "raid_bdev1", 00:16:13.571 "uuid": "7cec6f9b-453f-48fa-b9fc-4114253c1f96", 00:16:13.571 "strip_size_kb": 64, 00:16:13.571 "state": "online", 00:16:13.571 "raid_level": "raid5f", 00:16:13.571 "superblock": true, 00:16:13.571 "num_base_bdevs": 3, 00:16:13.571 "num_base_bdevs_discovered": 3, 00:16:13.571 "num_base_bdevs_operational": 3, 00:16:13.571 "process": { 00:16:13.571 "type": "rebuild", 00:16:13.571 "target": "spare", 00:16:13.571 "progress": { 00:16:13.571 "blocks": 69632, 00:16:13.571 "percent": 54 00:16:13.571 } 00:16:13.571 }, 00:16:13.571 "base_bdevs_list": [ 00:16:13.571 { 00:16:13.571 "name": "spare", 00:16:13.571 "uuid": "bc160262-77ac-5c24-9a0c-b8a878581756", 00:16:13.571 "is_configured": true, 00:16:13.571 "data_offset": 2048, 00:16:13.571 "data_size": 63488 00:16:13.571 }, 00:16:13.571 { 00:16:13.571 "name": "BaseBdev2", 00:16:13.571 "uuid": "1f49b0cd-fe9b-5764-a026-59bd17ae019e", 00:16:13.571 
"is_configured": true, 00:16:13.571 "data_offset": 2048, 00:16:13.571 "data_size": 63488 00:16:13.571 }, 00:16:13.571 { 00:16:13.571 "name": "BaseBdev3", 00:16:13.571 "uuid": "2f13bdf7-cb03-57fb-8da4-1c5803129663", 00:16:13.571 "is_configured": true, 00:16:13.571 "data_offset": 2048, 00:16:13.571 "data_size": 63488 00:16:13.571 } 00:16:13.571 ] 00:16:13.571 }' 00:16:13.571 10:00:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:13.571 10:00:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:13.571 10:00:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:13.571 10:00:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:13.571 10:00:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:14.511 10:00:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:14.511 10:00:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:14.511 10:00:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:14.511 10:00:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:14.511 10:00:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:14.511 10:00:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:14.511 10:00:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.511 10:00:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.511 10:00:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.511 10:00:51 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.771 10:00:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.771 10:00:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:14.771 "name": "raid_bdev1", 00:16:14.771 "uuid": "7cec6f9b-453f-48fa-b9fc-4114253c1f96", 00:16:14.771 "strip_size_kb": 64, 00:16:14.771 "state": "online", 00:16:14.771 "raid_level": "raid5f", 00:16:14.771 "superblock": true, 00:16:14.771 "num_base_bdevs": 3, 00:16:14.771 "num_base_bdevs_discovered": 3, 00:16:14.771 "num_base_bdevs_operational": 3, 00:16:14.771 "process": { 00:16:14.771 "type": "rebuild", 00:16:14.771 "target": "spare", 00:16:14.771 "progress": { 00:16:14.771 "blocks": 92160, 00:16:14.771 "percent": 72 00:16:14.771 } 00:16:14.771 }, 00:16:14.771 "base_bdevs_list": [ 00:16:14.771 { 00:16:14.771 "name": "spare", 00:16:14.771 "uuid": "bc160262-77ac-5c24-9a0c-b8a878581756", 00:16:14.771 "is_configured": true, 00:16:14.771 "data_offset": 2048, 00:16:14.771 "data_size": 63488 00:16:14.771 }, 00:16:14.771 { 00:16:14.771 "name": "BaseBdev2", 00:16:14.771 "uuid": "1f49b0cd-fe9b-5764-a026-59bd17ae019e", 00:16:14.771 "is_configured": true, 00:16:14.771 "data_offset": 2048, 00:16:14.771 "data_size": 63488 00:16:14.771 }, 00:16:14.771 { 00:16:14.771 "name": "BaseBdev3", 00:16:14.771 "uuid": "2f13bdf7-cb03-57fb-8da4-1c5803129663", 00:16:14.771 "is_configured": true, 00:16:14.771 "data_offset": 2048, 00:16:14.771 "data_size": 63488 00:16:14.771 } 00:16:14.771 ] 00:16:14.771 }' 00:16:14.771 10:00:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:14.771 10:00:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:14.771 10:00:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:14.771 10:00:51 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:14.771 10:00:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:15.711 10:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:15.711 10:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:15.711 10:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:15.711 10:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:15.711 10:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:15.711 10:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:15.711 10:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.711 10:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:15.711 10:00:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.711 10:00:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.711 10:00:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.711 10:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:15.711 "name": "raid_bdev1", 00:16:15.711 "uuid": "7cec6f9b-453f-48fa-b9fc-4114253c1f96", 00:16:15.711 "strip_size_kb": 64, 00:16:15.711 "state": "online", 00:16:15.711 "raid_level": "raid5f", 00:16:15.711 "superblock": true, 00:16:15.711 "num_base_bdevs": 3, 00:16:15.711 "num_base_bdevs_discovered": 3, 00:16:15.711 "num_base_bdevs_operational": 3, 00:16:15.711 "process": { 00:16:15.711 "type": "rebuild", 00:16:15.711 "target": "spare", 00:16:15.711 "progress": { 00:16:15.711 "blocks": 114688, 
00:16:15.711 "percent": 90 00:16:15.711 } 00:16:15.711 }, 00:16:15.711 "base_bdevs_list": [ 00:16:15.711 { 00:16:15.711 "name": "spare", 00:16:15.711 "uuid": "bc160262-77ac-5c24-9a0c-b8a878581756", 00:16:15.711 "is_configured": true, 00:16:15.711 "data_offset": 2048, 00:16:15.711 "data_size": 63488 00:16:15.711 }, 00:16:15.711 { 00:16:15.711 "name": "BaseBdev2", 00:16:15.711 "uuid": "1f49b0cd-fe9b-5764-a026-59bd17ae019e", 00:16:15.711 "is_configured": true, 00:16:15.711 "data_offset": 2048, 00:16:15.711 "data_size": 63488 00:16:15.711 }, 00:16:15.711 { 00:16:15.711 "name": "BaseBdev3", 00:16:15.711 "uuid": "2f13bdf7-cb03-57fb-8da4-1c5803129663", 00:16:15.711 "is_configured": true, 00:16:15.711 "data_offset": 2048, 00:16:15.711 "data_size": 63488 00:16:15.711 } 00:16:15.711 ] 00:16:15.711 }' 00:16:15.711 10:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:15.971 10:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:15.971 10:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:15.971 10:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:15.971 10:00:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:16.230 [2024-10-21 10:00:52.784305] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:16.230 [2024-10-21 10:00:52.784401] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:16.230 [2024-10-21 10:00:52.784521] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:17.170 10:00:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:17.170 10:00:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:17.170 
10:00:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:17.170 10:00:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:17.170 10:00:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:17.170 10:00:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:17.170 10:00:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.170 10:00:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.170 10:00:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:17.170 10:00:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.170 10:00:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.170 10:00:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:17.170 "name": "raid_bdev1", 00:16:17.170 "uuid": "7cec6f9b-453f-48fa-b9fc-4114253c1f96", 00:16:17.170 "strip_size_kb": 64, 00:16:17.170 "state": "online", 00:16:17.170 "raid_level": "raid5f", 00:16:17.170 "superblock": true, 00:16:17.170 "num_base_bdevs": 3, 00:16:17.170 "num_base_bdevs_discovered": 3, 00:16:17.170 "num_base_bdevs_operational": 3, 00:16:17.170 "base_bdevs_list": [ 00:16:17.170 { 00:16:17.170 "name": "spare", 00:16:17.170 "uuid": "bc160262-77ac-5c24-9a0c-b8a878581756", 00:16:17.170 "is_configured": true, 00:16:17.170 "data_offset": 2048, 00:16:17.170 "data_size": 63488 00:16:17.170 }, 00:16:17.170 { 00:16:17.170 "name": "BaseBdev2", 00:16:17.170 "uuid": "1f49b0cd-fe9b-5764-a026-59bd17ae019e", 00:16:17.170 "is_configured": true, 00:16:17.170 "data_offset": 2048, 00:16:17.170 "data_size": 63488 00:16:17.170 }, 00:16:17.170 { 00:16:17.170 "name": "BaseBdev3", 00:16:17.170 
"uuid": "2f13bdf7-cb03-57fb-8da4-1c5803129663", 00:16:17.170 "is_configured": true, 00:16:17.170 "data_offset": 2048, 00:16:17.170 "data_size": 63488 00:16:17.170 } 00:16:17.170 ] 00:16:17.170 }' 00:16:17.170 10:00:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:17.170 10:00:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:17.170 10:00:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:17.170 10:00:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:17.170 10:00:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:16:17.170 10:00:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:17.170 10:00:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:17.170 10:00:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:17.170 10:00:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:17.170 10:00:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:17.170 10:00:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.170 10:00:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:17.170 10:00:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.170 10:00:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.170 10:00:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.170 10:00:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:17.170 "name": 
"raid_bdev1", 00:16:17.170 "uuid": "7cec6f9b-453f-48fa-b9fc-4114253c1f96", 00:16:17.170 "strip_size_kb": 64, 00:16:17.170 "state": "online", 00:16:17.170 "raid_level": "raid5f", 00:16:17.170 "superblock": true, 00:16:17.170 "num_base_bdevs": 3, 00:16:17.170 "num_base_bdevs_discovered": 3, 00:16:17.170 "num_base_bdevs_operational": 3, 00:16:17.170 "base_bdevs_list": [ 00:16:17.170 { 00:16:17.170 "name": "spare", 00:16:17.170 "uuid": "bc160262-77ac-5c24-9a0c-b8a878581756", 00:16:17.170 "is_configured": true, 00:16:17.170 "data_offset": 2048, 00:16:17.170 "data_size": 63488 00:16:17.170 }, 00:16:17.170 { 00:16:17.170 "name": "BaseBdev2", 00:16:17.170 "uuid": "1f49b0cd-fe9b-5764-a026-59bd17ae019e", 00:16:17.170 "is_configured": true, 00:16:17.170 "data_offset": 2048, 00:16:17.170 "data_size": 63488 00:16:17.170 }, 00:16:17.170 { 00:16:17.170 "name": "BaseBdev3", 00:16:17.170 "uuid": "2f13bdf7-cb03-57fb-8da4-1c5803129663", 00:16:17.170 "is_configured": true, 00:16:17.170 "data_offset": 2048, 00:16:17.170 "data_size": 63488 00:16:17.170 } 00:16:17.170 ] 00:16:17.170 }' 00:16:17.170 10:00:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:17.170 10:00:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:17.170 10:00:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:17.170 10:00:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:17.170 10:00:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:17.170 10:00:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:17.170 10:00:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:17.170 10:00:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid5f 00:16:17.170 10:00:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:17.170 10:00:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:17.170 10:00:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:17.170 10:00:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:17.170 10:00:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:17.170 10:00:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:17.170 10:00:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.170 10:00:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:17.170 10:00:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.170 10:00:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.170 10:00:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.170 10:00:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:17.170 "name": "raid_bdev1", 00:16:17.170 "uuid": "7cec6f9b-453f-48fa-b9fc-4114253c1f96", 00:16:17.170 "strip_size_kb": 64, 00:16:17.170 "state": "online", 00:16:17.170 "raid_level": "raid5f", 00:16:17.170 "superblock": true, 00:16:17.170 "num_base_bdevs": 3, 00:16:17.170 "num_base_bdevs_discovered": 3, 00:16:17.170 "num_base_bdevs_operational": 3, 00:16:17.170 "base_bdevs_list": [ 00:16:17.170 { 00:16:17.170 "name": "spare", 00:16:17.170 "uuid": "bc160262-77ac-5c24-9a0c-b8a878581756", 00:16:17.170 "is_configured": true, 00:16:17.170 "data_offset": 2048, 00:16:17.170 "data_size": 63488 00:16:17.170 }, 00:16:17.170 { 00:16:17.170 "name": "BaseBdev2", 
00:16:17.170 "uuid": "1f49b0cd-fe9b-5764-a026-59bd17ae019e", 00:16:17.170 "is_configured": true, 00:16:17.170 "data_offset": 2048, 00:16:17.170 "data_size": 63488 00:16:17.170 }, 00:16:17.170 { 00:16:17.170 "name": "BaseBdev3", 00:16:17.170 "uuid": "2f13bdf7-cb03-57fb-8da4-1c5803129663", 00:16:17.170 "is_configured": true, 00:16:17.170 "data_offset": 2048, 00:16:17.170 "data_size": 63488 00:16:17.170 } 00:16:17.170 ] 00:16:17.170 }' 00:16:17.170 10:00:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:17.170 10:00:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.740 10:00:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:17.740 10:00:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.740 10:00:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.740 [2024-10-21 10:00:54.161459] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:17.740 [2024-10-21 10:00:54.161499] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:17.740 [2024-10-21 10:00:54.161649] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:17.740 [2024-10-21 10:00:54.161758] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:17.740 [2024-10-21 10:00:54.161800] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000005b80 name raid_bdev1, state offline 00:16:17.740 10:00:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.740 10:00:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:16:17.740 10:00:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.740 10:00:54 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.740 10:00:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.740 10:00:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.740 10:00:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:17.740 10:00:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:17.740 10:00:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:17.740 10:00:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:17.740 10:00:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:17.740 10:00:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:17.740 10:00:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:17.740 10:00:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:17.740 10:00:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:17.740 10:00:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:17.740 10:00:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:17.740 10:00:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:17.740 10:00:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:18.000 /dev/nbd0 00:16:18.000 10:00:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:18.000 10:00:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 
-- # waitfornbd nbd0 00:16:18.000 10:00:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:18.000 10:00:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:16:18.000 10:00:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:18.000 10:00:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:18.000 10:00:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:18.000 10:00:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:16:18.000 10:00:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:18.000 10:00:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:18.000 10:00:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:18.000 1+0 records in 00:16:18.000 1+0 records out 00:16:18.000 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000336096 s, 12.2 MB/s 00:16:18.000 10:00:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:18.000 10:00:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:16:18.000 10:00:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:18.000 10:00:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:18.000 10:00:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:16:18.000 10:00:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:18.000 10:00:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # 
(( i < 2 )) 00:16:18.000 10:00:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:18.260 /dev/nbd1 00:16:18.260 10:00:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:18.260 10:00:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:18.260 10:00:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:16:18.260 10:00:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:16:18.260 10:00:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:18.260 10:00:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:18.260 10:00:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:16:18.260 10:00:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:16:18.260 10:00:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:18.260 10:00:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:18.260 10:00:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:18.260 1+0 records in 00:16:18.260 1+0 records out 00:16:18.260 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000524955 s, 7.8 MB/s 00:16:18.260 10:00:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:18.260 10:00:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:16:18.260 10:00:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:18.260 10:00:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:18.260 10:00:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:16:18.260 10:00:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:18.260 10:00:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:18.260 10:00:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:18.520 10:00:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:18.520 10:00:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:18.520 10:00:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:18.520 10:00:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:18.520 10:00:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:18.520 10:00:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:18.520 10:00:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:18.780 10:00:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:18.780 10:00:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:18.780 10:00:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:18.780 10:00:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:18.780 10:00:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:18.780 10:00:55 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:18.780 10:00:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:18.780 10:00:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:18.780 10:00:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:18.780 10:00:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:18.780 10:00:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:18.780 10:00:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:18.780 10:00:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:18.780 10:00:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:18.780 10:00:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:18.780 10:00:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:18.780 10:00:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:18.780 10:00:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:18.780 10:00:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:18.780 10:00:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:18.780 10:00:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.780 10:00:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.780 10:00:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.780 10:00:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 
-- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:18.780 10:00:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.780 10:00:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.780 [2024-10-21 10:00:55.371856] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:18.780 [2024-10-21 10:00:55.371959] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:18.780 [2024-10-21 10:00:55.371990] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:16:18.780 [2024-10-21 10:00:55.372005] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:19.040 [2024-10-21 10:00:55.375037] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:19.040 [2024-10-21 10:00:55.375089] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:19.040 [2024-10-21 10:00:55.375220] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:19.040 [2024-10-21 10:00:55.375307] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:19.040 [2024-10-21 10:00:55.375476] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:19.040 [2024-10-21 10:00:55.375623] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:19.040 spare 00:16:19.040 10:00:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.040 10:00:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:19.040 10:00:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.040 10:00:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.040 [2024-10-21 10:00:55.475578] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000005f00 00:16:19.040 [2024-10-21 10:00:55.475656] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:19.040 [2024-10-21 10:00:55.476119] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047490 00:16:19.040 [2024-10-21 10:00:55.482781] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000005f00 00:16:19.040 [2024-10-21 10:00:55.482814] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000005f00 00:16:19.040 [2024-10-21 10:00:55.483120] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:19.040 10:00:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.040 10:00:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:19.040 10:00:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:19.040 10:00:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:19.040 10:00:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:19.040 10:00:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:19.040 10:00:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:19.040 10:00:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:19.040 10:00:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:19.040 10:00:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:19.040 10:00:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:19.040 10:00:55 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.040 10:00:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:19.040 10:00:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.040 10:00:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.040 10:00:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.040 10:00:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:19.040 "name": "raid_bdev1", 00:16:19.040 "uuid": "7cec6f9b-453f-48fa-b9fc-4114253c1f96", 00:16:19.040 "strip_size_kb": 64, 00:16:19.040 "state": "online", 00:16:19.040 "raid_level": "raid5f", 00:16:19.040 "superblock": true, 00:16:19.040 "num_base_bdevs": 3, 00:16:19.040 "num_base_bdevs_discovered": 3, 00:16:19.040 "num_base_bdevs_operational": 3, 00:16:19.040 "base_bdevs_list": [ 00:16:19.040 { 00:16:19.040 "name": "spare", 00:16:19.040 "uuid": "bc160262-77ac-5c24-9a0c-b8a878581756", 00:16:19.040 "is_configured": true, 00:16:19.040 "data_offset": 2048, 00:16:19.040 "data_size": 63488 00:16:19.040 }, 00:16:19.040 { 00:16:19.040 "name": "BaseBdev2", 00:16:19.040 "uuid": "1f49b0cd-fe9b-5764-a026-59bd17ae019e", 00:16:19.040 "is_configured": true, 00:16:19.040 "data_offset": 2048, 00:16:19.040 "data_size": 63488 00:16:19.040 }, 00:16:19.040 { 00:16:19.040 "name": "BaseBdev3", 00:16:19.040 "uuid": "2f13bdf7-cb03-57fb-8da4-1c5803129663", 00:16:19.040 "is_configured": true, 00:16:19.040 "data_offset": 2048, 00:16:19.040 "data_size": 63488 00:16:19.040 } 00:16:19.040 ] 00:16:19.040 }' 00:16:19.040 10:00:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:19.040 10:00:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.640 10:00:55 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:19.640 10:00:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:19.640 10:00:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:19.640 10:00:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:19.640 10:00:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:19.640 10:00:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.640 10:00:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:19.640 10:00:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.640 10:00:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.640 10:00:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.640 10:00:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:19.640 "name": "raid_bdev1", 00:16:19.640 "uuid": "7cec6f9b-453f-48fa-b9fc-4114253c1f96", 00:16:19.640 "strip_size_kb": 64, 00:16:19.640 "state": "online", 00:16:19.640 "raid_level": "raid5f", 00:16:19.640 "superblock": true, 00:16:19.640 "num_base_bdevs": 3, 00:16:19.640 "num_base_bdevs_discovered": 3, 00:16:19.640 "num_base_bdevs_operational": 3, 00:16:19.640 "base_bdevs_list": [ 00:16:19.640 { 00:16:19.640 "name": "spare", 00:16:19.640 "uuid": "bc160262-77ac-5c24-9a0c-b8a878581756", 00:16:19.640 "is_configured": true, 00:16:19.640 "data_offset": 2048, 00:16:19.640 "data_size": 63488 00:16:19.640 }, 00:16:19.640 { 00:16:19.640 "name": "BaseBdev2", 00:16:19.640 "uuid": "1f49b0cd-fe9b-5764-a026-59bd17ae019e", 00:16:19.640 "is_configured": true, 00:16:19.640 "data_offset": 2048, 00:16:19.640 "data_size": 63488 
00:16:19.640 }, 00:16:19.640 { 00:16:19.640 "name": "BaseBdev3", 00:16:19.640 "uuid": "2f13bdf7-cb03-57fb-8da4-1c5803129663", 00:16:19.640 "is_configured": true, 00:16:19.640 "data_offset": 2048, 00:16:19.640 "data_size": 63488 00:16:19.640 } 00:16:19.640 ] 00:16:19.640 }' 00:16:19.640 10:00:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:19.640 10:00:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:19.640 10:00:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:19.640 10:00:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:19.640 10:00:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.640 10:00:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.640 10:00:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.640 10:00:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:19.640 10:00:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.640 10:00:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:19.640 10:00:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:19.640 10:00:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.640 10:00:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.640 [2024-10-21 10:00:56.133808] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:19.640 10:00:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.640 10:00:56 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:19.640 10:00:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:19.640 10:00:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:19.640 10:00:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:19.640 10:00:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:19.640 10:00:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:19.640 10:00:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:19.640 10:00:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:19.640 10:00:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:19.640 10:00:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:19.640 10:00:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.640 10:00:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.640 10:00:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.640 10:00:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:19.640 10:00:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.640 10:00:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:19.641 "name": "raid_bdev1", 00:16:19.641 "uuid": "7cec6f9b-453f-48fa-b9fc-4114253c1f96", 00:16:19.641 "strip_size_kb": 64, 00:16:19.641 "state": "online", 00:16:19.641 "raid_level": "raid5f", 00:16:19.641 "superblock": true, 00:16:19.641 "num_base_bdevs": 3, 
00:16:19.641 "num_base_bdevs_discovered": 2, 00:16:19.641 "num_base_bdevs_operational": 2, 00:16:19.641 "base_bdevs_list": [ 00:16:19.641 { 00:16:19.641 "name": null, 00:16:19.641 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.641 "is_configured": false, 00:16:19.641 "data_offset": 0, 00:16:19.641 "data_size": 63488 00:16:19.641 }, 00:16:19.641 { 00:16:19.641 "name": "BaseBdev2", 00:16:19.641 "uuid": "1f49b0cd-fe9b-5764-a026-59bd17ae019e", 00:16:19.641 "is_configured": true, 00:16:19.641 "data_offset": 2048, 00:16:19.641 "data_size": 63488 00:16:19.641 }, 00:16:19.641 { 00:16:19.641 "name": "BaseBdev3", 00:16:19.641 "uuid": "2f13bdf7-cb03-57fb-8da4-1c5803129663", 00:16:19.641 "is_configured": true, 00:16:19.641 "data_offset": 2048, 00:16:19.641 "data_size": 63488 00:16:19.641 } 00:16:19.641 ] 00:16:19.641 }' 00:16:19.641 10:00:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:19.641 10:00:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.211 10:00:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:20.211 10:00:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.211 10:00:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.211 [2024-10-21 10:00:56.561126] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:20.211 [2024-10-21 10:00:56.561369] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:20.211 [2024-10-21 10:00:56.561396] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:20.211 [2024-10-21 10:00:56.561440] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:20.211 [2024-10-21 10:00:56.579663] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047560 00:16:20.211 10:00:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.211 10:00:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:20.211 [2024-10-21 10:00:56.587100] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:21.151 10:00:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:21.151 10:00:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:21.151 10:00:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:21.151 10:00:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:21.151 10:00:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:21.151 10:00:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.151 10:00:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:21.151 10:00:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.151 10:00:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.151 10:00:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.151 10:00:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:21.151 "name": "raid_bdev1", 00:16:21.151 "uuid": "7cec6f9b-453f-48fa-b9fc-4114253c1f96", 00:16:21.151 "strip_size_kb": 64, 00:16:21.151 "state": "online", 00:16:21.151 
"raid_level": "raid5f", 00:16:21.151 "superblock": true, 00:16:21.151 "num_base_bdevs": 3, 00:16:21.151 "num_base_bdevs_discovered": 3, 00:16:21.151 "num_base_bdevs_operational": 3, 00:16:21.151 "process": { 00:16:21.151 "type": "rebuild", 00:16:21.151 "target": "spare", 00:16:21.151 "progress": { 00:16:21.151 "blocks": 20480, 00:16:21.151 "percent": 16 00:16:21.151 } 00:16:21.151 }, 00:16:21.151 "base_bdevs_list": [ 00:16:21.151 { 00:16:21.151 "name": "spare", 00:16:21.151 "uuid": "bc160262-77ac-5c24-9a0c-b8a878581756", 00:16:21.151 "is_configured": true, 00:16:21.151 "data_offset": 2048, 00:16:21.151 "data_size": 63488 00:16:21.151 }, 00:16:21.151 { 00:16:21.151 "name": "BaseBdev2", 00:16:21.151 "uuid": "1f49b0cd-fe9b-5764-a026-59bd17ae019e", 00:16:21.151 "is_configured": true, 00:16:21.151 "data_offset": 2048, 00:16:21.151 "data_size": 63488 00:16:21.151 }, 00:16:21.151 { 00:16:21.151 "name": "BaseBdev3", 00:16:21.151 "uuid": "2f13bdf7-cb03-57fb-8da4-1c5803129663", 00:16:21.151 "is_configured": true, 00:16:21.151 "data_offset": 2048, 00:16:21.151 "data_size": 63488 00:16:21.151 } 00:16:21.151 ] 00:16:21.151 }' 00:16:21.151 10:00:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:21.151 10:00:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:21.151 10:00:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:21.151 10:00:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:21.151 10:00:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:21.151 10:00:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.151 10:00:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.151 [2024-10-21 10:00:57.743628] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:21.412 [2024-10-21 10:00:57.801439] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:21.412 [2024-10-21 10:00:57.801523] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:21.412 [2024-10-21 10:00:57.801543] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:21.412 [2024-10-21 10:00:57.801553] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:21.412 10:00:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.412 10:00:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:21.412 10:00:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:21.412 10:00:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:21.412 10:00:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:21.412 10:00:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:21.412 10:00:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:21.412 10:00:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:21.412 10:00:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:21.412 10:00:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:21.412 10:00:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:21.412 10:00:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:21.412 10:00:57 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.412 10:00:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.412 10:00:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.412 10:00:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.412 10:00:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:21.412 "name": "raid_bdev1", 00:16:21.412 "uuid": "7cec6f9b-453f-48fa-b9fc-4114253c1f96", 00:16:21.412 "strip_size_kb": 64, 00:16:21.412 "state": "online", 00:16:21.412 "raid_level": "raid5f", 00:16:21.412 "superblock": true, 00:16:21.412 "num_base_bdevs": 3, 00:16:21.412 "num_base_bdevs_discovered": 2, 00:16:21.412 "num_base_bdevs_operational": 2, 00:16:21.412 "base_bdevs_list": [ 00:16:21.412 { 00:16:21.412 "name": null, 00:16:21.412 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.412 "is_configured": false, 00:16:21.412 "data_offset": 0, 00:16:21.412 "data_size": 63488 00:16:21.413 }, 00:16:21.413 { 00:16:21.413 "name": "BaseBdev2", 00:16:21.413 "uuid": "1f49b0cd-fe9b-5764-a026-59bd17ae019e", 00:16:21.413 "is_configured": true, 00:16:21.413 "data_offset": 2048, 00:16:21.413 "data_size": 63488 00:16:21.413 }, 00:16:21.413 { 00:16:21.413 "name": "BaseBdev3", 00:16:21.413 "uuid": "2f13bdf7-cb03-57fb-8da4-1c5803129663", 00:16:21.413 "is_configured": true, 00:16:21.413 "data_offset": 2048, 00:16:21.413 "data_size": 63488 00:16:21.413 } 00:16:21.413 ] 00:16:21.413 }' 00:16:21.413 10:00:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:21.413 10:00:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.982 10:00:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:21.982 10:00:58 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.982 10:00:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.982 [2024-10-21 10:00:58.320916] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:21.982 [2024-10-21 10:00:58.321004] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:21.982 [2024-10-21 10:00:58.321033] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:16:21.982 [2024-10-21 10:00:58.321052] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:21.982 [2024-10-21 10:00:58.321703] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:21.982 [2024-10-21 10:00:58.321738] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:21.982 [2024-10-21 10:00:58.321867] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:21.982 [2024-10-21 10:00:58.321893] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:21.982 [2024-10-21 10:00:58.321906] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:21.982 [2024-10-21 10:00:58.321934] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:21.982 [2024-10-21 10:00:58.340432] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047630 00:16:21.982 spare 00:16:21.982 10:00:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.982 10:00:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:21.982 [2024-10-21 10:00:58.348445] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:22.922 10:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:22.922 10:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:22.922 10:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:22.922 10:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:22.922 10:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:22.922 10:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.922 10:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.922 10:00:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.922 10:00:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.922 10:00:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.922 10:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:22.922 "name": "raid_bdev1", 00:16:22.922 "uuid": "7cec6f9b-453f-48fa-b9fc-4114253c1f96", 00:16:22.922 "strip_size_kb": 64, 00:16:22.922 "state": 
"online", 00:16:22.922 "raid_level": "raid5f", 00:16:22.922 "superblock": true, 00:16:22.922 "num_base_bdevs": 3, 00:16:22.922 "num_base_bdevs_discovered": 3, 00:16:22.922 "num_base_bdevs_operational": 3, 00:16:22.922 "process": { 00:16:22.922 "type": "rebuild", 00:16:22.922 "target": "spare", 00:16:22.922 "progress": { 00:16:22.922 "blocks": 20480, 00:16:22.922 "percent": 16 00:16:22.922 } 00:16:22.922 }, 00:16:22.922 "base_bdevs_list": [ 00:16:22.922 { 00:16:22.922 "name": "spare", 00:16:22.922 "uuid": "bc160262-77ac-5c24-9a0c-b8a878581756", 00:16:22.922 "is_configured": true, 00:16:22.922 "data_offset": 2048, 00:16:22.922 "data_size": 63488 00:16:22.922 }, 00:16:22.922 { 00:16:22.922 "name": "BaseBdev2", 00:16:22.922 "uuid": "1f49b0cd-fe9b-5764-a026-59bd17ae019e", 00:16:22.922 "is_configured": true, 00:16:22.922 "data_offset": 2048, 00:16:22.922 "data_size": 63488 00:16:22.922 }, 00:16:22.922 { 00:16:22.922 "name": "BaseBdev3", 00:16:22.922 "uuid": "2f13bdf7-cb03-57fb-8da4-1c5803129663", 00:16:22.922 "is_configured": true, 00:16:22.922 "data_offset": 2048, 00:16:22.922 "data_size": 63488 00:16:22.922 } 00:16:22.922 ] 00:16:22.922 }' 00:16:22.922 10:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:22.922 10:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:22.922 10:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:22.922 10:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:22.922 10:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:22.922 10:00:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.922 10:00:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.922 [2024-10-21 10:00:59.503976] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:23.181 [2024-10-21 10:00:59.561598] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:23.181 [2024-10-21 10:00:59.561678] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:23.182 [2024-10-21 10:00:59.561699] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:23.182 [2024-10-21 10:00:59.561707] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:23.182 10:00:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.182 10:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:23.182 10:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:23.182 10:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:23.182 10:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:23.182 10:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:23.182 10:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:23.182 10:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:23.182 10:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:23.182 10:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:23.182 10:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:23.182 10:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.182 10:00:59 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.182 10:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:23.182 10:00:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.182 10:00:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.182 10:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:23.182 "name": "raid_bdev1", 00:16:23.182 "uuid": "7cec6f9b-453f-48fa-b9fc-4114253c1f96", 00:16:23.182 "strip_size_kb": 64, 00:16:23.182 "state": "online", 00:16:23.182 "raid_level": "raid5f", 00:16:23.182 "superblock": true, 00:16:23.182 "num_base_bdevs": 3, 00:16:23.182 "num_base_bdevs_discovered": 2, 00:16:23.182 "num_base_bdevs_operational": 2, 00:16:23.182 "base_bdevs_list": [ 00:16:23.182 { 00:16:23.182 "name": null, 00:16:23.182 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.182 "is_configured": false, 00:16:23.182 "data_offset": 0, 00:16:23.182 "data_size": 63488 00:16:23.182 }, 00:16:23.182 { 00:16:23.182 "name": "BaseBdev2", 00:16:23.182 "uuid": "1f49b0cd-fe9b-5764-a026-59bd17ae019e", 00:16:23.182 "is_configured": true, 00:16:23.182 "data_offset": 2048, 00:16:23.182 "data_size": 63488 00:16:23.182 }, 00:16:23.182 { 00:16:23.182 "name": "BaseBdev3", 00:16:23.182 "uuid": "2f13bdf7-cb03-57fb-8da4-1c5803129663", 00:16:23.182 "is_configured": true, 00:16:23.182 "data_offset": 2048, 00:16:23.182 "data_size": 63488 00:16:23.182 } 00:16:23.182 ] 00:16:23.182 }' 00:16:23.182 10:00:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:23.182 10:00:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.442 10:01:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:23.442 10:01:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:16:23.442 10:01:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:23.442 10:01:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:23.442 10:01:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:23.702 10:01:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:23.702 10:01:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.702 10:01:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.702 10:01:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.702 10:01:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.702 10:01:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:23.702 "name": "raid_bdev1", 00:16:23.702 "uuid": "7cec6f9b-453f-48fa-b9fc-4114253c1f96", 00:16:23.702 "strip_size_kb": 64, 00:16:23.702 "state": "online", 00:16:23.702 "raid_level": "raid5f", 00:16:23.702 "superblock": true, 00:16:23.702 "num_base_bdevs": 3, 00:16:23.702 "num_base_bdevs_discovered": 2, 00:16:23.702 "num_base_bdevs_operational": 2, 00:16:23.702 "base_bdevs_list": [ 00:16:23.702 { 00:16:23.702 "name": null, 00:16:23.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.702 "is_configured": false, 00:16:23.702 "data_offset": 0, 00:16:23.702 "data_size": 63488 00:16:23.702 }, 00:16:23.702 { 00:16:23.702 "name": "BaseBdev2", 00:16:23.702 "uuid": "1f49b0cd-fe9b-5764-a026-59bd17ae019e", 00:16:23.702 "is_configured": true, 00:16:23.702 "data_offset": 2048, 00:16:23.702 "data_size": 63488 00:16:23.702 }, 00:16:23.702 { 00:16:23.702 "name": "BaseBdev3", 00:16:23.702 "uuid": "2f13bdf7-cb03-57fb-8da4-1c5803129663", 00:16:23.702 "is_configured": true, 
00:16:23.702 "data_offset": 2048,
00:16:23.702 "data_size": 63488
00:16:23.702 }
00:16:23.702 ]
00:16:23.702 }'
00:16:23.702 10:01:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:16:23.702 10:01:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:16:23.702 10:01:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:16:23.702 10:01:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:16:23.702 10:01:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1
00:16:23.702 10:01:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:23.702 10:01:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:23.702 10:01:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:23.702 10:01:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:16:23.702 10:01:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:23.702 10:01:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:23.702 [2024-10-21 10:01:00.162679] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:16:23.702 [2024-10-21 10:01:00.162744] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:23.702 [2024-10-21 10:01:00.162794] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80
00:16:23.702 [2024-10-21 10:01:00.162817] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:23.702 [2024-10-21 10:01:00.163418] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:23.702 [2024-10-21 10:01:00.163454] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:16:23.702 [2024-10-21 10:01:00.163558] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1
00:16:23.702 [2024-10-21 10:01:00.163588] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5)
00:16:23.702 [2024-10-21 10:01:00.163607] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid
00:16:23.702 [2024-10-21 10:01:00.163620] bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument
00:16:23.702 BaseBdev1
00:16:23.702 10:01:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:23.702 10:01:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1
00:16:24.642 10:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2
00:16:24.642 10:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:16:24.642 10:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:16:24.642 10:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:16:24.642 10:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:16:24.642 10:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:16:24.642 10:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:24.642 10:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:24.642 10:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:24.642 10:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:24.642 10:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:24.642 10:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:24.642 10:01:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:24.642 10:01:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:24.642 10:01:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:24.642 10:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:24.642 "name": "raid_bdev1",
00:16:24.642 "uuid": "7cec6f9b-453f-48fa-b9fc-4114253c1f96",
00:16:24.642 "strip_size_kb": 64,
00:16:24.642 "state": "online",
00:16:24.642 "raid_level": "raid5f",
00:16:24.642 "superblock": true,
00:16:24.642 "num_base_bdevs": 3,
00:16:24.642 "num_base_bdevs_discovered": 2,
00:16:24.642 "num_base_bdevs_operational": 2,
00:16:24.642 "base_bdevs_list": [
00:16:24.642 {
00:16:24.642 "name": null,
00:16:24.642 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:24.642 "is_configured": false,
00:16:24.642 "data_offset": 0,
00:16:24.642 "data_size": 63488
00:16:24.642 },
00:16:24.642 {
00:16:24.642 "name": "BaseBdev2",
00:16:24.642 "uuid": "1f49b0cd-fe9b-5764-a026-59bd17ae019e",
00:16:24.642 "is_configured": true,
00:16:24.642 "data_offset": 2048,
00:16:24.642 "data_size": 63488
00:16:24.642 },
00:16:24.642 {
00:16:24.642 "name": "BaseBdev3",
00:16:24.642 "uuid": "2f13bdf7-cb03-57fb-8da4-1c5803129663",
00:16:24.642 "is_configured": true,
00:16:24.642 "data_offset": 2048,
00:16:24.642 "data_size": 63488
00:16:24.642 }
00:16:24.642 ]
00:16:24.642 }'
00:16:24.642 10:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:24.642 10:01:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:25.212 10:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none
00:16:25.212 10:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:16:25.212 10:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:16:25.212 10:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none
00:16:25.212 10:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:16:25.212 10:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:25.212 10:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:25.212 10:01:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:25.212 10:01:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:25.212 10:01:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:25.212 10:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:16:25.212 "name": "raid_bdev1",
00:16:25.212 "uuid": "7cec6f9b-453f-48fa-b9fc-4114253c1f96",
00:16:25.212 "strip_size_kb": 64,
00:16:25.212 "state": "online",
00:16:25.212 "raid_level": "raid5f",
00:16:25.212 "superblock": true,
00:16:25.212 "num_base_bdevs": 3,
00:16:25.212 "num_base_bdevs_discovered": 2,
00:16:25.212 "num_base_bdevs_operational": 2,
00:16:25.212 "base_bdevs_list": [
00:16:25.212 {
00:16:25.212 "name": null,
00:16:25.212 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:25.212 "is_configured": false,
00:16:25.212 "data_offset": 0,
00:16:25.212 "data_size": 63488
00:16:25.212 },
00:16:25.212 {
00:16:25.212 "name": "BaseBdev2",
00:16:25.212 "uuid": "1f49b0cd-fe9b-5764-a026-59bd17ae019e",
00:16:25.212 "is_configured": true,
00:16:25.212 "data_offset": 2048,
00:16:25.212 "data_size": 63488
00:16:25.212 },
00:16:25.212 {
00:16:25.212 "name": "BaseBdev3",
00:16:25.212 "uuid": "2f13bdf7-cb03-57fb-8da4-1c5803129663",
00:16:25.212 "is_configured": true,
00:16:25.212 "data_offset": 2048,
00:16:25.212 "data_size": 63488
00:16:25.212 }
00:16:25.212 ]
00:16:25.212 }'
00:16:25.212 10:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:16:25.212 10:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:16:25.212 10:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:16:25.472 10:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:16:25.472 10:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1
00:16:25.472 10:01:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0
00:16:25.472 10:01:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1
00:16:25.472 10:01:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:16:25.472 10:01:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:16:25.472 10:01:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:16:25.472 10:01:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:16:25.472 10:01:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1
00:16:25.472 10:01:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:25.472 10:01:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:25.472 [2024-10-21 10:01:01.847949] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:16:25.472 [2024-10-21 10:01:01.848248] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5)
00:16:25.472 [2024-10-21 10:01:01.848273] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid
00:16:25.472 request:
00:16:25.472 {
00:16:25.472 "base_bdev": "BaseBdev1",
00:16:25.472 "raid_bdev": "raid_bdev1",
00:16:25.472 "method": "bdev_raid_add_base_bdev",
00:16:25.472 "req_id": 1
00:16:25.472 }
00:16:25.472 Got JSON-RPC error response
00:16:25.472 response:
00:16:25.472 {
00:16:25.472 "code": -22,
00:16:25.472 "message": "Failed to add base bdev to RAID bdev: Invalid argument"
00:16:25.472 }
00:16:25.472 10:01:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:16:25.472 10:01:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1
00:16:25.472 10:01:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:16:25.472 10:01:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:16:25.472 10:01:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:16:25.472 10:01:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1
00:16:26.410 10:01:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2
00:16:26.410 10:01:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:16:26.410 10:01:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:16:26.410 10:01:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:16:26.410 10:01:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:16:26.410 10:01:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:16:26.410 10:01:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:26.410 10:01:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:26.410 10:01:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:26.410 10:01:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:26.410 10:01:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:26.410 10:01:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:26.410 10:01:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:26.410 10:01:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:26.410 10:01:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:26.410 10:01:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:26.410 "name": "raid_bdev1",
00:16:26.411 "uuid": "7cec6f9b-453f-48fa-b9fc-4114253c1f96",
00:16:26.411 "strip_size_kb": 64,
00:16:26.411 "state": "online",
00:16:26.411 "raid_level": "raid5f",
00:16:26.411 "superblock": true,
00:16:26.411 "num_base_bdevs": 3,
00:16:26.411 "num_base_bdevs_discovered": 2,
00:16:26.411 "num_base_bdevs_operational": 2,
00:16:26.411 "base_bdevs_list": [
00:16:26.411 {
00:16:26.411 "name": null,
00:16:26.411 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:26.411 "is_configured": false,
00:16:26.411 "data_offset": 0,
00:16:26.411 "data_size": 63488
00:16:26.411 },
00:16:26.411 {
00:16:26.411 "name": "BaseBdev2",
00:16:26.411 "uuid": "1f49b0cd-fe9b-5764-a026-59bd17ae019e",
00:16:26.411 "is_configured": true,
00:16:26.411 "data_offset": 2048,
00:16:26.411 "data_size": 63488
00:16:26.411 },
00:16:26.411 {
00:16:26.411 "name": "BaseBdev3",
00:16:26.411 "uuid": "2f13bdf7-cb03-57fb-8da4-1c5803129663",
00:16:26.411 "is_configured": true,
00:16:26.411 "data_offset": 2048,
00:16:26.411 "data_size": 63488
00:16:26.411 }
00:16:26.411 ]
00:16:26.411 }'
00:16:26.411 10:01:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:26.411 10:01:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:26.980 10:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none
00:16:26.980 10:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:16:26.980 10:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:16:26.980 10:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none
00:16:26.980 10:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:16:26.980 10:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:26.980 10:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:26.980 10:01:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:26.980 10:01:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:26.980 10:01:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:26.980 10:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:16:26.980 "name": "raid_bdev1",
00:16:26.980 "uuid": "7cec6f9b-453f-48fa-b9fc-4114253c1f96",
00:16:26.980 "strip_size_kb": 64,
00:16:26.980 "state": "online",
00:16:26.980 "raid_level": "raid5f",
00:16:26.980 "superblock": true,
00:16:26.980 "num_base_bdevs": 3,
00:16:26.980 "num_base_bdevs_discovered": 2,
00:16:26.980 "num_base_bdevs_operational": 2,
00:16:26.980 "base_bdevs_list": [
00:16:26.980 {
00:16:26.980 "name": null,
00:16:26.980 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:26.980 "is_configured": false,
00:16:26.980 "data_offset": 0,
00:16:26.980 "data_size": 63488
00:16:26.980 },
00:16:26.980 {
00:16:26.980 "name": "BaseBdev2",
00:16:26.980 "uuid": "1f49b0cd-fe9b-5764-a026-59bd17ae019e",
00:16:26.980 "is_configured": true,
00:16:26.980 "data_offset": 2048,
00:16:26.980 "data_size": 63488
00:16:26.980 },
00:16:26.980 {
00:16:26.980 "name": "BaseBdev3",
00:16:26.980 "uuid": "2f13bdf7-cb03-57fb-8da4-1c5803129663",
00:16:26.980 "is_configured": true,
00:16:26.980 "data_offset": 2048,
00:16:26.980 "data_size": 63488
00:16:26.980 }
00:16:26.980 ]
00:16:26.980 }'
00:16:26.980 10:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:16:26.980 10:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:16:26.980 10:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:16:26.980 10:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:16:26.980 10:01:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 81667
00:16:26.980 10:01:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 81667 ']'
00:16:26.980 10:01:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 81667
00:16:26.980 10:01:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname
00:16:26.980 10:01:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:16:26.980 10:01:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81667
00:16:26.980 10:01:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:16:26.980 10:01:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:16:26.980 10:01:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81667'
killing process with pid 81667
Received shutdown signal, test time was about 60.000000 seconds
00:16:26.980
00:16:26.981 Latency(us)
00:16:26.981 [2024-10-21T10:01:03.575Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:26.981 [2024-10-21T10:01:03.575Z] ===================================================================================================================
00:16:26.981 [2024-10-21T10:01:03.575Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:16:26.981 10:01:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 81667
00:16:26.981 [2024-10-21 10:01:03.503753] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:16:26.981 10:01:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 81667
00:16:26.981 [2024-10-21 10:01:03.503941] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:16:26.981 [2024-10-21 10:01:03.504023] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:16:26.981 [2024-10-21 10:01:03.504037] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000005f00 name raid_bdev1, state offline
00:16:27.549 [2024-10-21 10:01:03.928615] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:16:28.931 10:01:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0
************************************
00:16:28.931 END TEST raid5f_rebuild_test_sb
00:16:28.931 ************************************
00:16:28.931
00:16:28.931 real 0m23.599s
00:16:28.931 user 0m30.026s
00:16:28.931 sys 0m2.965s
00:16:28.931 10:01:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable
00:16:28.931 10:01:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:28.931 10:01:05 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4}
00:16:28.931 10:01:05 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false
00:16:28.931 10:01:05 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:16:28.931 10:01:05 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:16:28.931 10:01:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:16:28.931 ************************************
00:16:28.931 START TEST raid5f_state_function_test
00:16:28.931 ************************************
00:16:28.931 10:01:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 4 false
00:16:28.931 10:01:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f
00:16:28.931 10:01:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4
00:16:28.931 10:01:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false
00:16:28.931 10:01:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:16:28.931 10:01:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:16:28.931 10:01:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:16:28.931 10:01:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:16:28.931 10:01:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:16:28.931 10:01:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:16:28.931 10:01:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:16:28.931 10:01:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:16:28.931 10:01:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:16:28.931 10:01:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3
00:16:28.931 10:01:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:16:28.931 10:01:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:16:28.931 10:01:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4
00:16:28.931 10:01:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:16:28.931 10:01:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:16:28.931 10:01:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:16:28.931 10:01:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:16:28.931 10:01:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:16:28.931 10:01:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size
00:16:28.931 10:01:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:16:28.931 10:01:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:16:28.931 10:01:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']'
00:16:28.931 10:01:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64
00:16:28.931 10:01:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64'
00:16:28.931 10:01:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']'
00:16:28.931 10:01:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg=
00:16:28.931 10:01:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=82424
00:16:28.931 10:01:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:16:28.931 10:01:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 82424'
Process raid pid: 82424
00:16:28.931 10:01:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 82424
00:16:28.931 10:01:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 82424 ']'
00:16:28.931 10:01:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:28.931 10:01:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:16:28.931 10:01:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:16:28.931 10:01:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:16:28.931 10:01:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:28.931 [2024-10-21 10:01:05.371010] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization...
00:16:28.931 [2024-10-21 10:01:05.371226] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:16:29.191 [2024-10-21 10:01:05.534391] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:29.451 [2024-10-21 10:01:05.682404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:16:29.451 [2024-10-21 10:01:05.942007] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
[2024-10-21 10:01:05.942179] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:16:29.710 10:01:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:16:29.710 10:01:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # return 0
00:16:29.710 10:01:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:16:29.710 10:01:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:29.710 10:01:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:29.711 [2024-10-21 10:01:06.230759] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:16:29.711 [2024-10-21 10:01:06.230901] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:16:29.711 [2024-10-21 10:01:06.230941] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:16:29.711 [2024-10-21 10:01:06.230967] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:16:29.711 [2024-10-21 10:01:06.230986] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:16:29.711 [2024-10-21 10:01:06.231007] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:16:29.711 [2024-10-21 10:01:06.231041] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:16:29.711 [2024-10-21 10:01:06.231067] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:16:29.711 10:01:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:29.711 10:01:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:16:29.711 10:01:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:16:29.711 10:01:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:16:29.711 10:01:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:16:29.711 10:01:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:16:29.711 10:01:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:16:29.711 10:01:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:29.711 10:01:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:29.711 10:01:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:29.711 10:01:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:29.711 10:01:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:29.711 10:01:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:29.711 10:01:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:29.711 10:01:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:29.711 10:01:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:29.711 10:01:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:29.711 "name": "Existed_Raid",
00:16:29.711 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:29.711 "strip_size_kb": 64,
00:16:29.711 "state": "configuring",
00:16:29.711 "raid_level": "raid5f",
00:16:29.711 "superblock": false,
00:16:29.711 "num_base_bdevs": 4,
00:16:29.711 "num_base_bdevs_discovered": 0,
00:16:29.711 "num_base_bdevs_operational": 4,
00:16:29.711 "base_bdevs_list": [
00:16:29.711 {
00:16:29.711 "name": "BaseBdev1",
00:16:29.711 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:29.711 "is_configured": false,
00:16:29.711 "data_offset": 0,
00:16:29.711 "data_size": 0
00:16:29.711 },
00:16:29.711 {
00:16:29.711 "name": "BaseBdev2",
00:16:29.711 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:29.711 "is_configured": false,
00:16:29.711 "data_offset": 0,
00:16:29.711 "data_size": 0
00:16:29.711 },
00:16:29.711 {
00:16:29.711 "name": "BaseBdev3",
00:16:29.711 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:29.711 "is_configured": false,
00:16:29.711 "data_offset": 0,
00:16:29.711 "data_size": 0
00:16:29.711 },
00:16:29.711 {
00:16:29.711 "name": "BaseBdev4",
00:16:29.711 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:29.711 "is_configured": false,
00:16:29.711 "data_offset": 0,
00:16:29.711 "data_size": 0
00:16:29.711 }
00:16:29.711 ]
00:16:29.711 }'
00:16:29.711 10:01:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:29.711 10:01:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:30.281 10:01:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:16:30.281 10:01:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:30.281 10:01:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:30.281 [2024-10-21 10:01:06.681907] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:16:30.281 [2024-10-21 10:01:06.681961] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000005b80 name Existed_Raid, state configuring
00:16:30.281 10:01:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:30.281 10:01:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:16:30.281 10:01:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:30.281 10:01:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:30.281 [2024-10-21 10:01:06.693911] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:16:30.281 [2024-10-21 10:01:06.694026] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:16:30.281 [2024-10-21 10:01:06.694075] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:16:30.281 [2024-10-21 10:01:06.694101] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:16:30.281 [2024-10-21 10:01:06.694134] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:16:30.281 [2024-10-21 10:01:06.694164] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:16:30.281 [2024-10-21 10:01:06.694200] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:16:30.281 [2024-10-21 10:01:06.694231] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:16:30.281 10:01:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:30.281 10:01:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:16:30.281 10:01:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:30.281 10:01:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:30.281 [2024-10-21 10:01:06.747978] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
BaseBdev1
00:16:30.281 10:01:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:30.281 10:01:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:16:30.281 10:01:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:16:30.281 10:01:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:16:30.281 10:01:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i
00:16:30.281 10:01:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:16:30.281 10:01:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:16:30.281 10:01:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:16:30.281 10:01:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:30.281 10:01:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:30.281 10:01:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:30.281 10:01:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:16:30.281 10:01:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:30.281 10:01:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:30.281 [
00:16:30.281 {
00:16:30.281 "name": "BaseBdev1",
00:16:30.281 "aliases": [
00:16:30.281 "dccd4454-8b23-4128-b868-495fa37a772a"
00:16:30.281 ],
00:16:30.281 "product_name": "Malloc disk",
00:16:30.281 "block_size": 512,
00:16:30.281 "num_blocks": 65536,
00:16:30.281 "uuid": "dccd4454-8b23-4128-b868-495fa37a772a",
00:16:30.281 "assigned_rate_limits": {
00:16:30.281 "rw_ios_per_sec": 0,
00:16:30.281 "rw_mbytes_per_sec": 0,
00:16:30.281 "r_mbytes_per_sec": 0,
00:16:30.281 "w_mbytes_per_sec": 0
00:16:30.281 },
00:16:30.281 "claimed": true,
00:16:30.281 "claim_type": "exclusive_write",
00:16:30.281 "zoned": false,
00:16:30.281 "supported_io_types": {
00:16:30.281 "read": true,
00:16:30.281 "write": true,
00:16:30.281 "unmap": true,
00:16:30.281 "flush": true,
00:16:30.281 "reset": true,
00:16:30.281 "nvme_admin": false,
00:16:30.281 "nvme_io": false,
00:16:30.281 "nvme_io_md": false,
00:16:30.281 "write_zeroes": true,
00:16:30.281 "zcopy": true,
00:16:30.281 "get_zone_info": false,
00:16:30.281 "zone_management": false,
00:16:30.281 "zone_append": false,
00:16:30.281 "compare": false,
00:16:30.281 "compare_and_write": false,
00:16:30.281 "abort": true,
00:16:30.281 "seek_hole": false,
00:16:30.281 "seek_data": false,
00:16:30.281 "copy": true,
00:16:30.281 "nvme_iov_md": false
00:16:30.281 },
00:16:30.281 "memory_domains": [
00:16:30.281 {
00:16:30.281 "dma_device_id": "system",
00:16:30.281 "dma_device_type": 1
00:16:30.281 },
00:16:30.281 {
00:16:30.281 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:30.281 "dma_device_type": 2
00:16:30.281 }
00:16:30.281 ],
00:16:30.281 "driver_specific": {}
00:16:30.281 }
00:16:30.281 ] 00:16:30.281 10:01:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.281 10:01:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:30.281 10:01:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:30.281 10:01:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:30.281 10:01:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:30.281 10:01:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:30.281 10:01:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:30.281 10:01:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:30.281 10:01:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:30.281 10:01:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:30.281 10:01:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:30.281 10:01:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:30.281 10:01:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:30.281 10:01:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.281 10:01:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.281 10:01:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.281 10:01:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:16:30.281 10:01:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:30.282 "name": "Existed_Raid", 00:16:30.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.282 "strip_size_kb": 64, 00:16:30.282 "state": "configuring", 00:16:30.282 "raid_level": "raid5f", 00:16:30.282 "superblock": false, 00:16:30.282 "num_base_bdevs": 4, 00:16:30.282 "num_base_bdevs_discovered": 1, 00:16:30.282 "num_base_bdevs_operational": 4, 00:16:30.282 "base_bdevs_list": [ 00:16:30.282 { 00:16:30.282 "name": "BaseBdev1", 00:16:30.282 "uuid": "dccd4454-8b23-4128-b868-495fa37a772a", 00:16:30.282 "is_configured": true, 00:16:30.282 "data_offset": 0, 00:16:30.282 "data_size": 65536 00:16:30.282 }, 00:16:30.282 { 00:16:30.282 "name": "BaseBdev2", 00:16:30.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.282 "is_configured": false, 00:16:30.282 "data_offset": 0, 00:16:30.282 "data_size": 0 00:16:30.282 }, 00:16:30.282 { 00:16:30.282 "name": "BaseBdev3", 00:16:30.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.282 "is_configured": false, 00:16:30.282 "data_offset": 0, 00:16:30.282 "data_size": 0 00:16:30.282 }, 00:16:30.282 { 00:16:30.282 "name": "BaseBdev4", 00:16:30.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.282 "is_configured": false, 00:16:30.282 "data_offset": 0, 00:16:30.282 "data_size": 0 00:16:30.282 } 00:16:30.282 ] 00:16:30.282 }' 00:16:30.282 10:01:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:30.282 10:01:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.852 10:01:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:30.852 10:01:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.852 10:01:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.852 
[2024-10-21 10:01:07.219221] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:30.852 [2024-10-21 10:01:07.219285] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000005f00 name Existed_Raid, state configuring 00:16:30.852 10:01:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.852 10:01:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:30.852 10:01:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.852 10:01:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.852 [2024-10-21 10:01:07.227260] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:30.852 [2024-10-21 10:01:07.229610] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:30.852 [2024-10-21 10:01:07.229690] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:30.852 [2024-10-21 10:01:07.229721] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:30.852 [2024-10-21 10:01:07.229747] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:30.852 [2024-10-21 10:01:07.229767] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:30.852 [2024-10-21 10:01:07.229788] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:30.852 10:01:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.852 10:01:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:30.852 10:01:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:16:30.852 10:01:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:30.852 10:01:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:30.852 10:01:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:30.852 10:01:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:30.852 10:01:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:30.852 10:01:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:30.852 10:01:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:30.852 10:01:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:30.852 10:01:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:30.852 10:01:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:30.852 10:01:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.852 10:01:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:30.852 10:01:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.852 10:01:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.852 10:01:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.852 10:01:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:30.852 "name": "Existed_Raid", 00:16:30.852 "uuid": "00000000-0000-0000-0000-000000000000", 
00:16:30.852 "strip_size_kb": 64, 00:16:30.852 "state": "configuring", 00:16:30.852 "raid_level": "raid5f", 00:16:30.852 "superblock": false, 00:16:30.852 "num_base_bdevs": 4, 00:16:30.852 "num_base_bdevs_discovered": 1, 00:16:30.852 "num_base_bdevs_operational": 4, 00:16:30.852 "base_bdevs_list": [ 00:16:30.852 { 00:16:30.852 "name": "BaseBdev1", 00:16:30.852 "uuid": "dccd4454-8b23-4128-b868-495fa37a772a", 00:16:30.852 "is_configured": true, 00:16:30.852 "data_offset": 0, 00:16:30.852 "data_size": 65536 00:16:30.852 }, 00:16:30.852 { 00:16:30.852 "name": "BaseBdev2", 00:16:30.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.853 "is_configured": false, 00:16:30.853 "data_offset": 0, 00:16:30.853 "data_size": 0 00:16:30.853 }, 00:16:30.853 { 00:16:30.853 "name": "BaseBdev3", 00:16:30.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.853 "is_configured": false, 00:16:30.853 "data_offset": 0, 00:16:30.853 "data_size": 0 00:16:30.853 }, 00:16:30.853 { 00:16:30.853 "name": "BaseBdev4", 00:16:30.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.853 "is_configured": false, 00:16:30.853 "data_offset": 0, 00:16:30.853 "data_size": 0 00:16:30.853 } 00:16:30.853 ] 00:16:30.853 }' 00:16:30.853 10:01:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:30.853 10:01:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.112 10:01:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:31.112 10:01:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.112 10:01:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.372 [2024-10-21 10:01:07.755284] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:31.372 BaseBdev2 00:16:31.372 10:01:07 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.372 10:01:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:31.372 10:01:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:16:31.372 10:01:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:31.372 10:01:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:31.372 10:01:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:31.372 10:01:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:31.372 10:01:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:31.372 10:01:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.372 10:01:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.372 10:01:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.372 10:01:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:31.372 10:01:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.372 10:01:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.372 [ 00:16:31.372 { 00:16:31.372 "name": "BaseBdev2", 00:16:31.372 "aliases": [ 00:16:31.372 "8a6a91fe-74df-45d9-8697-f2105eb1ec64" 00:16:31.372 ], 00:16:31.372 "product_name": "Malloc disk", 00:16:31.372 "block_size": 512, 00:16:31.372 "num_blocks": 65536, 00:16:31.372 "uuid": "8a6a91fe-74df-45d9-8697-f2105eb1ec64", 00:16:31.372 "assigned_rate_limits": { 00:16:31.372 "rw_ios_per_sec": 0, 00:16:31.372 "rw_mbytes_per_sec": 0, 00:16:31.372 
"r_mbytes_per_sec": 0, 00:16:31.372 "w_mbytes_per_sec": 0 00:16:31.372 }, 00:16:31.372 "claimed": true, 00:16:31.372 "claim_type": "exclusive_write", 00:16:31.372 "zoned": false, 00:16:31.372 "supported_io_types": { 00:16:31.372 "read": true, 00:16:31.372 "write": true, 00:16:31.372 "unmap": true, 00:16:31.372 "flush": true, 00:16:31.372 "reset": true, 00:16:31.372 "nvme_admin": false, 00:16:31.372 "nvme_io": false, 00:16:31.372 "nvme_io_md": false, 00:16:31.372 "write_zeroes": true, 00:16:31.372 "zcopy": true, 00:16:31.372 "get_zone_info": false, 00:16:31.372 "zone_management": false, 00:16:31.372 "zone_append": false, 00:16:31.372 "compare": false, 00:16:31.372 "compare_and_write": false, 00:16:31.372 "abort": true, 00:16:31.372 "seek_hole": false, 00:16:31.372 "seek_data": false, 00:16:31.372 "copy": true, 00:16:31.372 "nvme_iov_md": false 00:16:31.372 }, 00:16:31.372 "memory_domains": [ 00:16:31.372 { 00:16:31.372 "dma_device_id": "system", 00:16:31.372 "dma_device_type": 1 00:16:31.372 }, 00:16:31.372 { 00:16:31.372 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:31.372 "dma_device_type": 2 00:16:31.372 } 00:16:31.372 ], 00:16:31.372 "driver_specific": {} 00:16:31.372 } 00:16:31.372 ] 00:16:31.372 10:01:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.372 10:01:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:31.372 10:01:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:31.372 10:01:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:31.372 10:01:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:31.372 10:01:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:31.372 10:01:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:16:31.372 10:01:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:31.372 10:01:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:31.372 10:01:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:31.372 10:01:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:31.373 10:01:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:31.373 10:01:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:31.373 10:01:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:31.373 10:01:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:31.373 10:01:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.373 10:01:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.373 10:01:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.373 10:01:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.373 10:01:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:31.373 "name": "Existed_Raid", 00:16:31.373 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.373 "strip_size_kb": 64, 00:16:31.373 "state": "configuring", 00:16:31.373 "raid_level": "raid5f", 00:16:31.373 "superblock": false, 00:16:31.373 "num_base_bdevs": 4, 00:16:31.373 "num_base_bdevs_discovered": 2, 00:16:31.373 "num_base_bdevs_operational": 4, 00:16:31.373 "base_bdevs_list": [ 00:16:31.373 { 00:16:31.373 "name": "BaseBdev1", 00:16:31.373 "uuid": 
"dccd4454-8b23-4128-b868-495fa37a772a", 00:16:31.373 "is_configured": true, 00:16:31.373 "data_offset": 0, 00:16:31.373 "data_size": 65536 00:16:31.373 }, 00:16:31.373 { 00:16:31.373 "name": "BaseBdev2", 00:16:31.373 "uuid": "8a6a91fe-74df-45d9-8697-f2105eb1ec64", 00:16:31.373 "is_configured": true, 00:16:31.373 "data_offset": 0, 00:16:31.373 "data_size": 65536 00:16:31.373 }, 00:16:31.373 { 00:16:31.373 "name": "BaseBdev3", 00:16:31.373 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.373 "is_configured": false, 00:16:31.373 "data_offset": 0, 00:16:31.373 "data_size": 0 00:16:31.373 }, 00:16:31.373 { 00:16:31.373 "name": "BaseBdev4", 00:16:31.373 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.373 "is_configured": false, 00:16:31.373 "data_offset": 0, 00:16:31.373 "data_size": 0 00:16:31.373 } 00:16:31.373 ] 00:16:31.373 }' 00:16:31.373 10:01:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:31.373 10:01:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.633 10:01:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:31.633 10:01:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.633 10:01:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.892 [2024-10-21 10:01:08.271190] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:31.892 BaseBdev3 00:16:31.892 10:01:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.892 10:01:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:31.892 10:01:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:16:31.892 10:01:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- 
# local bdev_timeout= 00:16:31.892 10:01:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:31.892 10:01:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:31.892 10:01:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:31.892 10:01:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:31.892 10:01:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.892 10:01:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.892 10:01:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.892 10:01:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:31.892 10:01:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.892 10:01:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.892 [ 00:16:31.892 { 00:16:31.892 "name": "BaseBdev3", 00:16:31.892 "aliases": [ 00:16:31.892 "68dd7363-d876-42a7-8fc7-3630110162b8" 00:16:31.892 ], 00:16:31.892 "product_name": "Malloc disk", 00:16:31.892 "block_size": 512, 00:16:31.892 "num_blocks": 65536, 00:16:31.892 "uuid": "68dd7363-d876-42a7-8fc7-3630110162b8", 00:16:31.892 "assigned_rate_limits": { 00:16:31.892 "rw_ios_per_sec": 0, 00:16:31.892 "rw_mbytes_per_sec": 0, 00:16:31.892 "r_mbytes_per_sec": 0, 00:16:31.892 "w_mbytes_per_sec": 0 00:16:31.892 }, 00:16:31.892 "claimed": true, 00:16:31.892 "claim_type": "exclusive_write", 00:16:31.892 "zoned": false, 00:16:31.892 "supported_io_types": { 00:16:31.892 "read": true, 00:16:31.892 "write": true, 00:16:31.892 "unmap": true, 00:16:31.892 "flush": true, 00:16:31.892 "reset": true, 00:16:31.892 "nvme_admin": false, 
00:16:31.892 "nvme_io": false, 00:16:31.892 "nvme_io_md": false, 00:16:31.892 "write_zeroes": true, 00:16:31.892 "zcopy": true, 00:16:31.892 "get_zone_info": false, 00:16:31.892 "zone_management": false, 00:16:31.892 "zone_append": false, 00:16:31.892 "compare": false, 00:16:31.892 "compare_and_write": false, 00:16:31.892 "abort": true, 00:16:31.892 "seek_hole": false, 00:16:31.892 "seek_data": false, 00:16:31.892 "copy": true, 00:16:31.892 "nvme_iov_md": false 00:16:31.892 }, 00:16:31.892 "memory_domains": [ 00:16:31.892 { 00:16:31.892 "dma_device_id": "system", 00:16:31.892 "dma_device_type": 1 00:16:31.893 }, 00:16:31.893 { 00:16:31.893 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:31.893 "dma_device_type": 2 00:16:31.893 } 00:16:31.893 ], 00:16:31.893 "driver_specific": {} 00:16:31.893 } 00:16:31.893 ] 00:16:31.893 10:01:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.893 10:01:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:31.893 10:01:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:31.893 10:01:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:31.893 10:01:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:31.893 10:01:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:31.893 10:01:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:31.893 10:01:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:31.893 10:01:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:31.893 10:01:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:16:31.893 10:01:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:31.893 10:01:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:31.893 10:01:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:31.893 10:01:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:31.893 10:01:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:31.893 10:01:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.893 10:01:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.893 10:01:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.893 10:01:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.893 10:01:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:31.893 "name": "Existed_Raid", 00:16:31.893 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.893 "strip_size_kb": 64, 00:16:31.893 "state": "configuring", 00:16:31.893 "raid_level": "raid5f", 00:16:31.893 "superblock": false, 00:16:31.893 "num_base_bdevs": 4, 00:16:31.893 "num_base_bdevs_discovered": 3, 00:16:31.893 "num_base_bdevs_operational": 4, 00:16:31.893 "base_bdevs_list": [ 00:16:31.893 { 00:16:31.893 "name": "BaseBdev1", 00:16:31.893 "uuid": "dccd4454-8b23-4128-b868-495fa37a772a", 00:16:31.893 "is_configured": true, 00:16:31.893 "data_offset": 0, 00:16:31.893 "data_size": 65536 00:16:31.893 }, 00:16:31.893 { 00:16:31.893 "name": "BaseBdev2", 00:16:31.893 "uuid": "8a6a91fe-74df-45d9-8697-f2105eb1ec64", 00:16:31.893 "is_configured": true, 00:16:31.893 "data_offset": 0, 00:16:31.893 "data_size": 65536 00:16:31.893 }, 00:16:31.893 { 
00:16:31.893 "name": "BaseBdev3", 00:16:31.893 "uuid": "68dd7363-d876-42a7-8fc7-3630110162b8", 00:16:31.893 "is_configured": true, 00:16:31.893 "data_offset": 0, 00:16:31.893 "data_size": 65536 00:16:31.893 }, 00:16:31.893 { 00:16:31.893 "name": "BaseBdev4", 00:16:31.893 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.893 "is_configured": false, 00:16:31.893 "data_offset": 0, 00:16:31.893 "data_size": 0 00:16:31.893 } 00:16:31.893 ] 00:16:31.893 }' 00:16:31.893 10:01:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:31.893 10:01:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.462 10:01:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:32.462 10:01:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.462 10:01:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.462 [2024-10-21 10:01:08.825948] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:32.462 [2024-10-21 10:01:08.826127] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:16:32.462 [2024-10-21 10:01:08.826158] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:32.462 [2024-10-21 10:01:08.826488] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:16:32.462 [2024-10-21 10:01:08.835454] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:16:32.462 [2024-10-21 10:01:08.835520] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006280 00:16:32.462 [2024-10-21 10:01:08.835868] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:32.462 BaseBdev4 00:16:32.462 10:01:08 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.462 10:01:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:16:32.462 10:01:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:16:32.462 10:01:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:32.462 10:01:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:32.462 10:01:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:32.462 10:01:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:32.462 10:01:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:32.462 10:01:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.462 10:01:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.462 10:01:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.462 10:01:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:32.462 10:01:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.462 10:01:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.462 [ 00:16:32.462 { 00:16:32.462 "name": "BaseBdev4", 00:16:32.462 "aliases": [ 00:16:32.462 "bfcf68a0-4f92-419c-9327-0407b072eba1" 00:16:32.462 ], 00:16:32.462 "product_name": "Malloc disk", 00:16:32.462 "block_size": 512, 00:16:32.462 "num_blocks": 65536, 00:16:32.462 "uuid": "bfcf68a0-4f92-419c-9327-0407b072eba1", 00:16:32.462 "assigned_rate_limits": { 00:16:32.462 "rw_ios_per_sec": 0, 00:16:32.462 
"rw_mbytes_per_sec": 0, 00:16:32.462 "r_mbytes_per_sec": 0, 00:16:32.462 "w_mbytes_per_sec": 0 00:16:32.462 }, 00:16:32.463 "claimed": true, 00:16:32.463 "claim_type": "exclusive_write", 00:16:32.463 "zoned": false, 00:16:32.463 "supported_io_types": { 00:16:32.463 "read": true, 00:16:32.463 "write": true, 00:16:32.463 "unmap": true, 00:16:32.463 "flush": true, 00:16:32.463 "reset": true, 00:16:32.463 "nvme_admin": false, 00:16:32.463 "nvme_io": false, 00:16:32.463 "nvme_io_md": false, 00:16:32.463 "write_zeroes": true, 00:16:32.463 "zcopy": true, 00:16:32.463 "get_zone_info": false, 00:16:32.463 "zone_management": false, 00:16:32.463 "zone_append": false, 00:16:32.463 "compare": false, 00:16:32.463 "compare_and_write": false, 00:16:32.463 "abort": true, 00:16:32.463 "seek_hole": false, 00:16:32.463 "seek_data": false, 00:16:32.463 "copy": true, 00:16:32.463 "nvme_iov_md": false 00:16:32.463 }, 00:16:32.463 "memory_domains": [ 00:16:32.463 { 00:16:32.463 "dma_device_id": "system", 00:16:32.463 "dma_device_type": 1 00:16:32.463 }, 00:16:32.463 { 00:16:32.463 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:32.463 "dma_device_type": 2 00:16:32.463 } 00:16:32.463 ], 00:16:32.463 "driver_specific": {} 00:16:32.463 } 00:16:32.463 ] 00:16:32.463 10:01:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.463 10:01:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:32.463 10:01:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:32.463 10:01:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:32.463 10:01:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:32.463 10:01:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:32.463 10:01:08 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:32.463 10:01:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:32.463 10:01:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:32.463 10:01:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:32.463 10:01:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:32.463 10:01:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:32.463 10:01:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:32.463 10:01:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:32.463 10:01:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.463 10:01:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.463 10:01:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.463 10:01:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:32.463 10:01:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.463 10:01:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:32.463 "name": "Existed_Raid", 00:16:32.463 "uuid": "ff408851-d8e2-49ca-a18f-7d4141a1f63a", 00:16:32.463 "strip_size_kb": 64, 00:16:32.463 "state": "online", 00:16:32.463 "raid_level": "raid5f", 00:16:32.463 "superblock": false, 00:16:32.463 "num_base_bdevs": 4, 00:16:32.463 "num_base_bdevs_discovered": 4, 00:16:32.463 "num_base_bdevs_operational": 4, 00:16:32.463 "base_bdevs_list": [ 00:16:32.463 { 00:16:32.463 "name": 
"BaseBdev1", 00:16:32.463 "uuid": "dccd4454-8b23-4128-b868-495fa37a772a", 00:16:32.463 "is_configured": true, 00:16:32.463 "data_offset": 0, 00:16:32.463 "data_size": 65536 00:16:32.463 }, 00:16:32.463 { 00:16:32.463 "name": "BaseBdev2", 00:16:32.463 "uuid": "8a6a91fe-74df-45d9-8697-f2105eb1ec64", 00:16:32.463 "is_configured": true, 00:16:32.463 "data_offset": 0, 00:16:32.463 "data_size": 65536 00:16:32.463 }, 00:16:32.463 { 00:16:32.463 "name": "BaseBdev3", 00:16:32.463 "uuid": "68dd7363-d876-42a7-8fc7-3630110162b8", 00:16:32.463 "is_configured": true, 00:16:32.463 "data_offset": 0, 00:16:32.463 "data_size": 65536 00:16:32.463 }, 00:16:32.463 { 00:16:32.463 "name": "BaseBdev4", 00:16:32.463 "uuid": "bfcf68a0-4f92-419c-9327-0407b072eba1", 00:16:32.463 "is_configured": true, 00:16:32.463 "data_offset": 0, 00:16:32.463 "data_size": 65536 00:16:32.463 } 00:16:32.463 ] 00:16:32.463 }' 00:16:32.463 10:01:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:32.463 10:01:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.034 10:01:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:33.034 10:01:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:33.034 10:01:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:33.034 10:01:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:33.034 10:01:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:33.034 10:01:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:33.034 10:01:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:33.034 10:01:09 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:33.034 10:01:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.034 10:01:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.034 [2024-10-21 10:01:09.344774] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:33.034 10:01:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.034 10:01:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:33.034 "name": "Existed_Raid", 00:16:33.034 "aliases": [ 00:16:33.034 "ff408851-d8e2-49ca-a18f-7d4141a1f63a" 00:16:33.034 ], 00:16:33.034 "product_name": "Raid Volume", 00:16:33.034 "block_size": 512, 00:16:33.034 "num_blocks": 196608, 00:16:33.034 "uuid": "ff408851-d8e2-49ca-a18f-7d4141a1f63a", 00:16:33.034 "assigned_rate_limits": { 00:16:33.034 "rw_ios_per_sec": 0, 00:16:33.034 "rw_mbytes_per_sec": 0, 00:16:33.034 "r_mbytes_per_sec": 0, 00:16:33.034 "w_mbytes_per_sec": 0 00:16:33.034 }, 00:16:33.034 "claimed": false, 00:16:33.034 "zoned": false, 00:16:33.034 "supported_io_types": { 00:16:33.034 "read": true, 00:16:33.034 "write": true, 00:16:33.034 "unmap": false, 00:16:33.034 "flush": false, 00:16:33.034 "reset": true, 00:16:33.034 "nvme_admin": false, 00:16:33.034 "nvme_io": false, 00:16:33.034 "nvme_io_md": false, 00:16:33.034 "write_zeroes": true, 00:16:33.034 "zcopy": false, 00:16:33.034 "get_zone_info": false, 00:16:33.034 "zone_management": false, 00:16:33.034 "zone_append": false, 00:16:33.034 "compare": false, 00:16:33.034 "compare_and_write": false, 00:16:33.034 "abort": false, 00:16:33.034 "seek_hole": false, 00:16:33.034 "seek_data": false, 00:16:33.034 "copy": false, 00:16:33.034 "nvme_iov_md": false 00:16:33.034 }, 00:16:33.034 "driver_specific": { 00:16:33.034 "raid": { 00:16:33.034 "uuid": "ff408851-d8e2-49ca-a18f-7d4141a1f63a", 00:16:33.034 "strip_size_kb": 64, 
00:16:33.034 "state": "online", 00:16:33.034 "raid_level": "raid5f", 00:16:33.034 "superblock": false, 00:16:33.034 "num_base_bdevs": 4, 00:16:33.034 "num_base_bdevs_discovered": 4, 00:16:33.034 "num_base_bdevs_operational": 4, 00:16:33.034 "base_bdevs_list": [ 00:16:33.034 { 00:16:33.034 "name": "BaseBdev1", 00:16:33.034 "uuid": "dccd4454-8b23-4128-b868-495fa37a772a", 00:16:33.034 "is_configured": true, 00:16:33.034 "data_offset": 0, 00:16:33.034 "data_size": 65536 00:16:33.034 }, 00:16:33.034 { 00:16:33.034 "name": "BaseBdev2", 00:16:33.034 "uuid": "8a6a91fe-74df-45d9-8697-f2105eb1ec64", 00:16:33.034 "is_configured": true, 00:16:33.034 "data_offset": 0, 00:16:33.034 "data_size": 65536 00:16:33.034 }, 00:16:33.034 { 00:16:33.034 "name": "BaseBdev3", 00:16:33.034 "uuid": "68dd7363-d876-42a7-8fc7-3630110162b8", 00:16:33.034 "is_configured": true, 00:16:33.034 "data_offset": 0, 00:16:33.034 "data_size": 65536 00:16:33.034 }, 00:16:33.034 { 00:16:33.034 "name": "BaseBdev4", 00:16:33.034 "uuid": "bfcf68a0-4f92-419c-9327-0407b072eba1", 00:16:33.034 "is_configured": true, 00:16:33.034 "data_offset": 0, 00:16:33.034 "data_size": 65536 00:16:33.034 } 00:16:33.034 ] 00:16:33.034 } 00:16:33.034 } 00:16:33.034 }' 00:16:33.034 10:01:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:33.034 10:01:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:33.034 BaseBdev2 00:16:33.034 BaseBdev3 00:16:33.034 BaseBdev4' 00:16:33.034 10:01:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:33.034 10:01:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:33.034 10:01:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:33.034 10:01:09 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:33.034 10:01:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.034 10:01:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.034 10:01:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:33.034 10:01:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.034 10:01:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:33.034 10:01:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:33.034 10:01:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:33.034 10:01:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:33.034 10:01:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:33.034 10:01:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.034 10:01:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.034 10:01:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.034 10:01:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:33.034 10:01:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:33.034 10:01:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:33.034 10:01:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:16:33.034 10:01:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:33.034 10:01:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.034 10:01:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.034 10:01:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.034 10:01:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:33.034 10:01:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:33.034 10:01:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:33.295 10:01:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:33.295 10:01:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:33.295 10:01:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.295 10:01:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.295 10:01:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.295 10:01:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:33.295 10:01:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:33.295 10:01:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:33.295 10:01:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.295 10:01:09 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:16:33.295 [2024-10-21 10:01:09.659992] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:33.295 10:01:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.295 10:01:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:33.295 10:01:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:16:33.295 10:01:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:33.295 10:01:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:33.295 10:01:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:33.295 10:01:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:33.295 10:01:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:33.295 10:01:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:33.295 10:01:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:33.295 10:01:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:33.295 10:01:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:33.295 10:01:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:33.295 10:01:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:33.295 10:01:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:33.295 10:01:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:33.295 10:01:09 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.295 10:01:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.295 10:01:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.295 10:01:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:33.295 10:01:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.295 10:01:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:33.295 "name": "Existed_Raid", 00:16:33.295 "uuid": "ff408851-d8e2-49ca-a18f-7d4141a1f63a", 00:16:33.295 "strip_size_kb": 64, 00:16:33.295 "state": "online", 00:16:33.295 "raid_level": "raid5f", 00:16:33.295 "superblock": false, 00:16:33.295 "num_base_bdevs": 4, 00:16:33.295 "num_base_bdevs_discovered": 3, 00:16:33.295 "num_base_bdevs_operational": 3, 00:16:33.295 "base_bdevs_list": [ 00:16:33.295 { 00:16:33.295 "name": null, 00:16:33.295 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.295 "is_configured": false, 00:16:33.295 "data_offset": 0, 00:16:33.295 "data_size": 65536 00:16:33.295 }, 00:16:33.295 { 00:16:33.295 "name": "BaseBdev2", 00:16:33.295 "uuid": "8a6a91fe-74df-45d9-8697-f2105eb1ec64", 00:16:33.295 "is_configured": true, 00:16:33.295 "data_offset": 0, 00:16:33.295 "data_size": 65536 00:16:33.295 }, 00:16:33.295 { 00:16:33.295 "name": "BaseBdev3", 00:16:33.295 "uuid": "68dd7363-d876-42a7-8fc7-3630110162b8", 00:16:33.295 "is_configured": true, 00:16:33.295 "data_offset": 0, 00:16:33.295 "data_size": 65536 00:16:33.295 }, 00:16:33.295 { 00:16:33.295 "name": "BaseBdev4", 00:16:33.295 "uuid": "bfcf68a0-4f92-419c-9327-0407b072eba1", 00:16:33.295 "is_configured": true, 00:16:33.295 "data_offset": 0, 00:16:33.295 "data_size": 65536 00:16:33.295 } 00:16:33.295 ] 00:16:33.295 }' 00:16:33.295 
10:01:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:33.295 10:01:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.867 10:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:33.867 10:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:33.867 10:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.867 10:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:33.867 10:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.867 10:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.867 10:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.867 10:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:33.867 10:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:33.867 10:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:33.867 10:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.867 10:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.867 [2024-10-21 10:01:10.284636] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:33.867 [2024-10-21 10:01:10.284757] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:33.867 [2024-10-21 10:01:10.388829] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:33.867 10:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:16:33.867 10:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:33.867 10:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:33.867 10:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.867 10:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:33.867 10:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.867 10:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.867 10:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.867 10:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:33.867 10:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:33.867 10:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:33.867 10:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.867 10:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.867 [2024-10-21 10:01:10.452755] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:34.128 10:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.128 10:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:34.128 10:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:34.128 10:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.128 10:01:10 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.128 10:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:34.128 10:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.128 10:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.128 10:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:34.128 10:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:34.128 10:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:16:34.128 10:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.128 10:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.128 [2024-10-21 10:01:10.614303] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:34.128 [2024-10-21 10:01:10.614360] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state offline 00:16:34.128 10:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.128 10:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:34.128 10:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:34.388 10:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.388 10:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.388 10:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.388 10:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 
00:16:34.388 10:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.388 10:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:34.388 10:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:34.388 10:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:16:34.388 10:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:34.388 10:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:34.388 10:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:34.389 10:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.389 10:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.389 BaseBdev2 00:16:34.389 10:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.389 10:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:34.389 10:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:16:34.389 10:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:34.389 10:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:34.389 10:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:34.389 10:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:34.389 10:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:34.389 10:01:10 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.389 10:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.389 10:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.389 10:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:34.389 10:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.389 10:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.389 [ 00:16:34.389 { 00:16:34.389 "name": "BaseBdev2", 00:16:34.389 "aliases": [ 00:16:34.389 "0fa8604f-37c3-4905-9e78-cbaaca9523a5" 00:16:34.389 ], 00:16:34.389 "product_name": "Malloc disk", 00:16:34.389 "block_size": 512, 00:16:34.389 "num_blocks": 65536, 00:16:34.389 "uuid": "0fa8604f-37c3-4905-9e78-cbaaca9523a5", 00:16:34.389 "assigned_rate_limits": { 00:16:34.389 "rw_ios_per_sec": 0, 00:16:34.389 "rw_mbytes_per_sec": 0, 00:16:34.389 "r_mbytes_per_sec": 0, 00:16:34.389 "w_mbytes_per_sec": 0 00:16:34.389 }, 00:16:34.389 "claimed": false, 00:16:34.389 "zoned": false, 00:16:34.389 "supported_io_types": { 00:16:34.389 "read": true, 00:16:34.389 "write": true, 00:16:34.389 "unmap": true, 00:16:34.389 "flush": true, 00:16:34.389 "reset": true, 00:16:34.389 "nvme_admin": false, 00:16:34.389 "nvme_io": false, 00:16:34.389 "nvme_io_md": false, 00:16:34.389 "write_zeroes": true, 00:16:34.389 "zcopy": true, 00:16:34.389 "get_zone_info": false, 00:16:34.389 "zone_management": false, 00:16:34.389 "zone_append": false, 00:16:34.389 "compare": false, 00:16:34.389 "compare_and_write": false, 00:16:34.389 "abort": true, 00:16:34.389 "seek_hole": false, 00:16:34.389 "seek_data": false, 00:16:34.389 "copy": true, 00:16:34.389 "nvme_iov_md": false 00:16:34.389 }, 00:16:34.389 "memory_domains": [ 00:16:34.389 { 00:16:34.389 "dma_device_id": "system", 00:16:34.389 
"dma_device_type": 1 00:16:34.389 }, 00:16:34.389 { 00:16:34.389 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:34.389 "dma_device_type": 2 00:16:34.389 } 00:16:34.389 ], 00:16:34.389 "driver_specific": {} 00:16:34.389 } 00:16:34.389 ] 00:16:34.389 10:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.389 10:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:34.389 10:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:34.389 10:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:34.389 10:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:34.389 10:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.389 10:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.389 BaseBdev3 00:16:34.389 10:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.389 10:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:34.389 10:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:16:34.389 10:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:34.389 10:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:34.389 10:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:34.389 10:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:34.389 10:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:34.389 10:01:10 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.389 10:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.389 10:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.389 10:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:34.389 10:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.389 10:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.389 [ 00:16:34.389 { 00:16:34.389 "name": "BaseBdev3", 00:16:34.389 "aliases": [ 00:16:34.389 "40f246fb-1b37-4449-99c9-ec6728d26a3c" 00:16:34.389 ], 00:16:34.389 "product_name": "Malloc disk", 00:16:34.389 "block_size": 512, 00:16:34.389 "num_blocks": 65536, 00:16:34.389 "uuid": "40f246fb-1b37-4449-99c9-ec6728d26a3c", 00:16:34.389 "assigned_rate_limits": { 00:16:34.389 "rw_ios_per_sec": 0, 00:16:34.389 "rw_mbytes_per_sec": 0, 00:16:34.389 "r_mbytes_per_sec": 0, 00:16:34.389 "w_mbytes_per_sec": 0 00:16:34.389 }, 00:16:34.389 "claimed": false, 00:16:34.389 "zoned": false, 00:16:34.389 "supported_io_types": { 00:16:34.389 "read": true, 00:16:34.389 "write": true, 00:16:34.389 "unmap": true, 00:16:34.389 "flush": true, 00:16:34.389 "reset": true, 00:16:34.389 "nvme_admin": false, 00:16:34.389 "nvme_io": false, 00:16:34.389 "nvme_io_md": false, 00:16:34.389 "write_zeroes": true, 00:16:34.389 "zcopy": true, 00:16:34.389 "get_zone_info": false, 00:16:34.389 "zone_management": false, 00:16:34.389 "zone_append": false, 00:16:34.389 "compare": false, 00:16:34.389 "compare_and_write": false, 00:16:34.389 "abort": true, 00:16:34.389 "seek_hole": false, 00:16:34.389 "seek_data": false, 00:16:34.389 "copy": true, 00:16:34.389 "nvme_iov_md": false 00:16:34.389 }, 00:16:34.389 "memory_domains": [ 00:16:34.389 { 00:16:34.389 
"dma_device_id": "system", 00:16:34.389 "dma_device_type": 1 00:16:34.389 }, 00:16:34.389 { 00:16:34.389 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:34.389 "dma_device_type": 2 00:16:34.389 } 00:16:34.389 ], 00:16:34.389 "driver_specific": {} 00:16:34.389 } 00:16:34.389 ] 00:16:34.389 10:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.389 10:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:34.389 10:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:34.389 10:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:34.389 10:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:34.389 10:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.389 10:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.650 BaseBdev4 00:16:34.650 10:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.650 10:01:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:16:34.650 10:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:16:34.650 10:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:34.650 10:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:34.650 10:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:34.650 10:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:34.650 10:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 
00:16:34.650 10:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.650 10:01:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.650 10:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.650 10:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:34.650 10:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.650 10:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.650 [ 00:16:34.650 { 00:16:34.650 "name": "BaseBdev4", 00:16:34.650 "aliases": [ 00:16:34.650 "8bd4d154-2f99-422d-aa2b-4e4f92966a4c" 00:16:34.650 ], 00:16:34.650 "product_name": "Malloc disk", 00:16:34.650 "block_size": 512, 00:16:34.650 "num_blocks": 65536, 00:16:34.650 "uuid": "8bd4d154-2f99-422d-aa2b-4e4f92966a4c", 00:16:34.650 "assigned_rate_limits": { 00:16:34.650 "rw_ios_per_sec": 0, 00:16:34.650 "rw_mbytes_per_sec": 0, 00:16:34.650 "r_mbytes_per_sec": 0, 00:16:34.650 "w_mbytes_per_sec": 0 00:16:34.650 }, 00:16:34.650 "claimed": false, 00:16:34.650 "zoned": false, 00:16:34.650 "supported_io_types": { 00:16:34.650 "read": true, 00:16:34.650 "write": true, 00:16:34.650 "unmap": true, 00:16:34.650 "flush": true, 00:16:34.650 "reset": true, 00:16:34.650 "nvme_admin": false, 00:16:34.650 "nvme_io": false, 00:16:34.650 "nvme_io_md": false, 00:16:34.650 "write_zeroes": true, 00:16:34.650 "zcopy": true, 00:16:34.650 "get_zone_info": false, 00:16:34.650 "zone_management": false, 00:16:34.650 "zone_append": false, 00:16:34.650 "compare": false, 00:16:34.650 "compare_and_write": false, 00:16:34.650 "abort": true, 00:16:34.650 "seek_hole": false, 00:16:34.650 "seek_data": false, 00:16:34.651 "copy": true, 00:16:34.651 "nvme_iov_md": false 00:16:34.651 }, 00:16:34.651 "memory_domains": [ 
00:16:34.651 { 00:16:34.651 "dma_device_id": "system", 00:16:34.651 "dma_device_type": 1 00:16:34.651 }, 00:16:34.651 { 00:16:34.651 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:34.651 "dma_device_type": 2 00:16:34.651 } 00:16:34.651 ], 00:16:34.651 "driver_specific": {} 00:16:34.651 } 00:16:34.651 ] 00:16:34.651 10:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.651 10:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:34.651 10:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:34.651 10:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:34.651 10:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:34.651 10:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.651 10:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.651 [2024-10-21 10:01:11.043858] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:34.651 [2024-10-21 10:01:11.043908] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:34.651 [2024-10-21 10:01:11.043931] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:34.651 [2024-10-21 10:01:11.046042] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:34.651 [2024-10-21 10:01:11.046099] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:34.651 10:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.651 10:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # 
verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:34.651 10:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:34.651 10:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:34.651 10:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:34.651 10:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:34.651 10:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:34.651 10:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:34.651 10:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:34.651 10:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:34.651 10:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:34.651 10:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.651 10:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:34.651 10:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.651 10:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.651 10:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.651 10:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:34.651 "name": "Existed_Raid", 00:16:34.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.651 "strip_size_kb": 64, 00:16:34.651 "state": "configuring", 00:16:34.651 "raid_level": "raid5f", 00:16:34.651 
"superblock": false, 00:16:34.651 "num_base_bdevs": 4, 00:16:34.651 "num_base_bdevs_discovered": 3, 00:16:34.651 "num_base_bdevs_operational": 4, 00:16:34.651 "base_bdevs_list": [ 00:16:34.651 { 00:16:34.651 "name": "BaseBdev1", 00:16:34.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.651 "is_configured": false, 00:16:34.651 "data_offset": 0, 00:16:34.651 "data_size": 0 00:16:34.651 }, 00:16:34.651 { 00:16:34.651 "name": "BaseBdev2", 00:16:34.651 "uuid": "0fa8604f-37c3-4905-9e78-cbaaca9523a5", 00:16:34.651 "is_configured": true, 00:16:34.651 "data_offset": 0, 00:16:34.651 "data_size": 65536 00:16:34.651 }, 00:16:34.651 { 00:16:34.651 "name": "BaseBdev3", 00:16:34.651 "uuid": "40f246fb-1b37-4449-99c9-ec6728d26a3c", 00:16:34.651 "is_configured": true, 00:16:34.651 "data_offset": 0, 00:16:34.651 "data_size": 65536 00:16:34.651 }, 00:16:34.651 { 00:16:34.651 "name": "BaseBdev4", 00:16:34.651 "uuid": "8bd4d154-2f99-422d-aa2b-4e4f92966a4c", 00:16:34.651 "is_configured": true, 00:16:34.651 "data_offset": 0, 00:16:34.651 "data_size": 65536 00:16:34.651 } 00:16:34.651 ] 00:16:34.651 }' 00:16:34.651 10:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:34.651 10:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.911 10:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:34.911 10:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.911 10:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.911 [2024-10-21 10:01:11.483156] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:34.911 10:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.911 10:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid5f 64 4 00:16:34.911 10:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:34.911 10:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:34.911 10:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:34.911 10:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:34.911 10:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:34.911 10:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:34.911 10:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:34.911 10:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:34.911 10:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:34.911 10:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:34.911 10:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.911 10:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.911 10:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.171 10:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.171 10:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:35.171 "name": "Existed_Raid", 00:16:35.171 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.171 "strip_size_kb": 64, 00:16:35.171 "state": "configuring", 00:16:35.171 "raid_level": "raid5f", 00:16:35.171 "superblock": false, 
00:16:35.171 "num_base_bdevs": 4, 00:16:35.171 "num_base_bdevs_discovered": 2, 00:16:35.171 "num_base_bdevs_operational": 4, 00:16:35.171 "base_bdevs_list": [ 00:16:35.171 { 00:16:35.171 "name": "BaseBdev1", 00:16:35.171 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.171 "is_configured": false, 00:16:35.171 "data_offset": 0, 00:16:35.171 "data_size": 0 00:16:35.171 }, 00:16:35.171 { 00:16:35.171 "name": null, 00:16:35.171 "uuid": "0fa8604f-37c3-4905-9e78-cbaaca9523a5", 00:16:35.171 "is_configured": false, 00:16:35.171 "data_offset": 0, 00:16:35.171 "data_size": 65536 00:16:35.171 }, 00:16:35.171 { 00:16:35.171 "name": "BaseBdev3", 00:16:35.171 "uuid": "40f246fb-1b37-4449-99c9-ec6728d26a3c", 00:16:35.171 "is_configured": true, 00:16:35.171 "data_offset": 0, 00:16:35.171 "data_size": 65536 00:16:35.171 }, 00:16:35.171 { 00:16:35.171 "name": "BaseBdev4", 00:16:35.171 "uuid": "8bd4d154-2f99-422d-aa2b-4e4f92966a4c", 00:16:35.171 "is_configured": true, 00:16:35.171 "data_offset": 0, 00:16:35.171 "data_size": 65536 00:16:35.171 } 00:16:35.171 ] 00:16:35.171 }' 00:16:35.171 10:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:35.171 10:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.431 10:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.431 10:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.431 10:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.431 10:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:35.431 10:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.431 10:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:35.431 
10:01:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:35.431 10:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.431 10:01:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.691 [2024-10-21 10:01:12.031245] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:35.691 BaseBdev1 00:16:35.691 10:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.691 10:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:35.691 10:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:16:35.691 10:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:35.691 10:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:35.691 10:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:35.691 10:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:35.691 10:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:35.691 10:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.691 10:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.691 10:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.691 10:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:35.691 10:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.691 
10:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.691 [ 00:16:35.691 { 00:16:35.691 "name": "BaseBdev1", 00:16:35.691 "aliases": [ 00:16:35.691 "4f679435-a26f-41d7-b577-4195b77f42f5" 00:16:35.691 ], 00:16:35.691 "product_name": "Malloc disk", 00:16:35.691 "block_size": 512, 00:16:35.691 "num_blocks": 65536, 00:16:35.691 "uuid": "4f679435-a26f-41d7-b577-4195b77f42f5", 00:16:35.691 "assigned_rate_limits": { 00:16:35.691 "rw_ios_per_sec": 0, 00:16:35.691 "rw_mbytes_per_sec": 0, 00:16:35.691 "r_mbytes_per_sec": 0, 00:16:35.691 "w_mbytes_per_sec": 0 00:16:35.691 }, 00:16:35.691 "claimed": true, 00:16:35.691 "claim_type": "exclusive_write", 00:16:35.691 "zoned": false, 00:16:35.691 "supported_io_types": { 00:16:35.691 "read": true, 00:16:35.691 "write": true, 00:16:35.691 "unmap": true, 00:16:35.691 "flush": true, 00:16:35.691 "reset": true, 00:16:35.691 "nvme_admin": false, 00:16:35.691 "nvme_io": false, 00:16:35.691 "nvme_io_md": false, 00:16:35.691 "write_zeroes": true, 00:16:35.691 "zcopy": true, 00:16:35.691 "get_zone_info": false, 00:16:35.691 "zone_management": false, 00:16:35.691 "zone_append": false, 00:16:35.691 "compare": false, 00:16:35.691 "compare_and_write": false, 00:16:35.691 "abort": true, 00:16:35.691 "seek_hole": false, 00:16:35.691 "seek_data": false, 00:16:35.691 "copy": true, 00:16:35.691 "nvme_iov_md": false 00:16:35.691 }, 00:16:35.691 "memory_domains": [ 00:16:35.691 { 00:16:35.691 "dma_device_id": "system", 00:16:35.691 "dma_device_type": 1 00:16:35.691 }, 00:16:35.691 { 00:16:35.691 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:35.691 "dma_device_type": 2 00:16:35.691 } 00:16:35.691 ], 00:16:35.691 "driver_specific": {} 00:16:35.691 } 00:16:35.691 ] 00:16:35.691 10:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.691 10:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:35.691 10:01:12 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:35.691 10:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:35.691 10:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:35.691 10:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:35.691 10:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:35.691 10:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:35.691 10:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:35.691 10:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:35.691 10:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:35.691 10:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:35.691 10:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.691 10:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:35.691 10:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.691 10:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.691 10:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.691 10:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:35.691 "name": "Existed_Raid", 00:16:35.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.692 "strip_size_kb": 64, 00:16:35.692 "state": 
"configuring", 00:16:35.692 "raid_level": "raid5f", 00:16:35.692 "superblock": false, 00:16:35.692 "num_base_bdevs": 4, 00:16:35.692 "num_base_bdevs_discovered": 3, 00:16:35.692 "num_base_bdevs_operational": 4, 00:16:35.692 "base_bdevs_list": [ 00:16:35.692 { 00:16:35.692 "name": "BaseBdev1", 00:16:35.692 "uuid": "4f679435-a26f-41d7-b577-4195b77f42f5", 00:16:35.692 "is_configured": true, 00:16:35.692 "data_offset": 0, 00:16:35.692 "data_size": 65536 00:16:35.692 }, 00:16:35.692 { 00:16:35.692 "name": null, 00:16:35.692 "uuid": "0fa8604f-37c3-4905-9e78-cbaaca9523a5", 00:16:35.692 "is_configured": false, 00:16:35.692 "data_offset": 0, 00:16:35.692 "data_size": 65536 00:16:35.692 }, 00:16:35.692 { 00:16:35.692 "name": "BaseBdev3", 00:16:35.692 "uuid": "40f246fb-1b37-4449-99c9-ec6728d26a3c", 00:16:35.692 "is_configured": true, 00:16:35.692 "data_offset": 0, 00:16:35.692 "data_size": 65536 00:16:35.692 }, 00:16:35.692 { 00:16:35.692 "name": "BaseBdev4", 00:16:35.692 "uuid": "8bd4d154-2f99-422d-aa2b-4e4f92966a4c", 00:16:35.692 "is_configured": true, 00:16:35.692 "data_offset": 0, 00:16:35.692 "data_size": 65536 00:16:35.692 } 00:16:35.692 ] 00:16:35.692 }' 00:16:35.692 10:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:35.692 10:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.953 10:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.953 10:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:35.953 10:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.953 10:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.953 10:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.953 10:01:12 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:35.953 10:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:35.953 10:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.953 10:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.953 [2024-10-21 10:01:12.518498] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:35.953 10:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.953 10:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:35.953 10:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:35.953 10:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:35.953 10:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:35.953 10:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:35.953 10:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:35.953 10:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:35.953 10:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:35.953 10:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:35.953 10:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:35.953 10:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.953 10:01:12 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.953 10:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:35.953 10:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.213 10:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.213 10:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:36.213 "name": "Existed_Raid", 00:16:36.213 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.213 "strip_size_kb": 64, 00:16:36.213 "state": "configuring", 00:16:36.213 "raid_level": "raid5f", 00:16:36.213 "superblock": false, 00:16:36.213 "num_base_bdevs": 4, 00:16:36.213 "num_base_bdevs_discovered": 2, 00:16:36.213 "num_base_bdevs_operational": 4, 00:16:36.213 "base_bdevs_list": [ 00:16:36.213 { 00:16:36.213 "name": "BaseBdev1", 00:16:36.213 "uuid": "4f679435-a26f-41d7-b577-4195b77f42f5", 00:16:36.213 "is_configured": true, 00:16:36.213 "data_offset": 0, 00:16:36.213 "data_size": 65536 00:16:36.213 }, 00:16:36.213 { 00:16:36.213 "name": null, 00:16:36.213 "uuid": "0fa8604f-37c3-4905-9e78-cbaaca9523a5", 00:16:36.213 "is_configured": false, 00:16:36.213 "data_offset": 0, 00:16:36.213 "data_size": 65536 00:16:36.213 }, 00:16:36.213 { 00:16:36.213 "name": null, 00:16:36.213 "uuid": "40f246fb-1b37-4449-99c9-ec6728d26a3c", 00:16:36.213 "is_configured": false, 00:16:36.213 "data_offset": 0, 00:16:36.213 "data_size": 65536 00:16:36.213 }, 00:16:36.213 { 00:16:36.213 "name": "BaseBdev4", 00:16:36.213 "uuid": "8bd4d154-2f99-422d-aa2b-4e4f92966a4c", 00:16:36.213 "is_configured": true, 00:16:36.213 "data_offset": 0, 00:16:36.213 "data_size": 65536 00:16:36.213 } 00:16:36.213 ] 00:16:36.213 }' 00:16:36.213 10:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:36.213 10:01:12 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.474 10:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:36.474 10:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.474 10:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.474 10:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.474 10:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.474 10:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:36.474 10:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:36.474 10:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.474 10:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.474 [2024-10-21 10:01:12.949803] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:36.474 10:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.474 10:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:36.474 10:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:36.474 10:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:36.474 10:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:36.474 10:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:36.474 
10:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:36.474 10:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:36.474 10:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:36.474 10:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:36.474 10:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:36.474 10:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:36.474 10:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.474 10:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.474 10:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.474 10:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.474 10:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:36.474 "name": "Existed_Raid", 00:16:36.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.474 "strip_size_kb": 64, 00:16:36.474 "state": "configuring", 00:16:36.474 "raid_level": "raid5f", 00:16:36.474 "superblock": false, 00:16:36.474 "num_base_bdevs": 4, 00:16:36.474 "num_base_bdevs_discovered": 3, 00:16:36.474 "num_base_bdevs_operational": 4, 00:16:36.474 "base_bdevs_list": [ 00:16:36.474 { 00:16:36.474 "name": "BaseBdev1", 00:16:36.474 "uuid": "4f679435-a26f-41d7-b577-4195b77f42f5", 00:16:36.474 "is_configured": true, 00:16:36.474 "data_offset": 0, 00:16:36.474 "data_size": 65536 00:16:36.474 }, 00:16:36.474 { 00:16:36.474 "name": null, 00:16:36.474 "uuid": "0fa8604f-37c3-4905-9e78-cbaaca9523a5", 00:16:36.474 "is_configured": 
false, 00:16:36.474 "data_offset": 0, 00:16:36.474 "data_size": 65536 00:16:36.474 }, 00:16:36.474 { 00:16:36.474 "name": "BaseBdev3", 00:16:36.474 "uuid": "40f246fb-1b37-4449-99c9-ec6728d26a3c", 00:16:36.474 "is_configured": true, 00:16:36.474 "data_offset": 0, 00:16:36.474 "data_size": 65536 00:16:36.474 }, 00:16:36.474 { 00:16:36.474 "name": "BaseBdev4", 00:16:36.474 "uuid": "8bd4d154-2f99-422d-aa2b-4e4f92966a4c", 00:16:36.474 "is_configured": true, 00:16:36.474 "data_offset": 0, 00:16:36.474 "data_size": 65536 00:16:36.474 } 00:16:36.474 ] 00:16:36.474 }' 00:16:36.474 10:01:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:36.474 10:01:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.045 10:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.045 10:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.045 10:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.045 10:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:37.045 10:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.045 10:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:37.045 10:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:37.045 10:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.045 10:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.045 [2024-10-21 10:01:13.397077] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:37.045 10:01:13 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.045 10:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:37.045 10:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:37.045 10:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:37.045 10:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:37.045 10:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:37.045 10:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:37.045 10:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:37.045 10:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:37.045 10:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:37.045 10:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:37.045 10:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.045 10:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:37.045 10:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.045 10:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.045 10:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.045 10:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:37.045 "name": "Existed_Raid", 00:16:37.045 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:37.045 "strip_size_kb": 64, 00:16:37.045 "state": "configuring", 00:16:37.045 "raid_level": "raid5f", 00:16:37.045 "superblock": false, 00:16:37.045 "num_base_bdevs": 4, 00:16:37.045 "num_base_bdevs_discovered": 2, 00:16:37.045 "num_base_bdevs_operational": 4, 00:16:37.045 "base_bdevs_list": [ 00:16:37.045 { 00:16:37.045 "name": null, 00:16:37.045 "uuid": "4f679435-a26f-41d7-b577-4195b77f42f5", 00:16:37.045 "is_configured": false, 00:16:37.045 "data_offset": 0, 00:16:37.045 "data_size": 65536 00:16:37.045 }, 00:16:37.045 { 00:16:37.045 "name": null, 00:16:37.045 "uuid": "0fa8604f-37c3-4905-9e78-cbaaca9523a5", 00:16:37.045 "is_configured": false, 00:16:37.045 "data_offset": 0, 00:16:37.045 "data_size": 65536 00:16:37.045 }, 00:16:37.045 { 00:16:37.045 "name": "BaseBdev3", 00:16:37.045 "uuid": "40f246fb-1b37-4449-99c9-ec6728d26a3c", 00:16:37.045 "is_configured": true, 00:16:37.045 "data_offset": 0, 00:16:37.045 "data_size": 65536 00:16:37.045 }, 00:16:37.045 { 00:16:37.045 "name": "BaseBdev4", 00:16:37.045 "uuid": "8bd4d154-2f99-422d-aa2b-4e4f92966a4c", 00:16:37.045 "is_configured": true, 00:16:37.045 "data_offset": 0, 00:16:37.045 "data_size": 65536 00:16:37.045 } 00:16:37.045 ] 00:16:37.045 }' 00:16:37.045 10:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:37.045 10:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.614 10:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:37.614 10:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.614 10:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.614 10:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.615 10:01:13 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.615 10:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:37.615 10:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:37.615 10:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.615 10:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.615 [2024-10-21 10:01:13.962554] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:37.615 10:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.615 10:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:37.615 10:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:37.615 10:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:37.615 10:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:37.615 10:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:37.615 10:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:37.615 10:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:37.615 10:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:37.615 10:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:37.615 10:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:37.615 10:01:13 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.615 10:01:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:37.615 10:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.615 10:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.615 10:01:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.615 10:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:37.615 "name": "Existed_Raid", 00:16:37.615 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.615 "strip_size_kb": 64, 00:16:37.615 "state": "configuring", 00:16:37.615 "raid_level": "raid5f", 00:16:37.615 "superblock": false, 00:16:37.615 "num_base_bdevs": 4, 00:16:37.615 "num_base_bdevs_discovered": 3, 00:16:37.615 "num_base_bdevs_operational": 4, 00:16:37.615 "base_bdevs_list": [ 00:16:37.615 { 00:16:37.615 "name": null, 00:16:37.615 "uuid": "4f679435-a26f-41d7-b577-4195b77f42f5", 00:16:37.615 "is_configured": false, 00:16:37.615 "data_offset": 0, 00:16:37.615 "data_size": 65536 00:16:37.615 }, 00:16:37.615 { 00:16:37.615 "name": "BaseBdev2", 00:16:37.615 "uuid": "0fa8604f-37c3-4905-9e78-cbaaca9523a5", 00:16:37.615 "is_configured": true, 00:16:37.615 "data_offset": 0, 00:16:37.615 "data_size": 65536 00:16:37.615 }, 00:16:37.615 { 00:16:37.615 "name": "BaseBdev3", 00:16:37.615 "uuid": "40f246fb-1b37-4449-99c9-ec6728d26a3c", 00:16:37.615 "is_configured": true, 00:16:37.615 "data_offset": 0, 00:16:37.615 "data_size": 65536 00:16:37.615 }, 00:16:37.615 { 00:16:37.615 "name": "BaseBdev4", 00:16:37.615 "uuid": "8bd4d154-2f99-422d-aa2b-4e4f92966a4c", 00:16:37.615 "is_configured": true, 00:16:37.615 "data_offset": 0, 00:16:37.615 "data_size": 65536 00:16:37.615 } 00:16:37.615 ] 00:16:37.615 }' 00:16:37.615 10:01:14 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:37.615 10:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.875 10:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.875 10:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.875 10:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.875 10:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:37.875 10:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.875 10:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:37.875 10:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.875 10:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:37.875 10:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.875 10:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.214 10:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.215 10:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 4f679435-a26f-41d7-b577-4195b77f42f5 00:16:38.215 10:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.215 10:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.215 [2024-10-21 10:01:14.563526] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:38.215 [2024-10-21 
10:01:14.563708] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:16:38.215 [2024-10-21 10:01:14.563736] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:38.215 [2024-10-21 10:01:14.564080] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:38.215 [2024-10-21 10:01:14.571696] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:16:38.215 [2024-10-21 10:01:14.571759] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006600 00:16:38.215 [2024-10-21 10:01:14.572094] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:38.215 NewBaseBdev 00:16:38.215 10:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.215 10:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:38.215 10:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:16:38.215 10:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:38.215 10:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:38.215 10:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:38.215 10:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:38.215 10:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:38.215 10:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.215 10:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.215 10:01:14 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.215 10:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:38.215 10:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.215 10:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.215 [ 00:16:38.215 { 00:16:38.215 "name": "NewBaseBdev", 00:16:38.215 "aliases": [ 00:16:38.215 "4f679435-a26f-41d7-b577-4195b77f42f5" 00:16:38.215 ], 00:16:38.215 "product_name": "Malloc disk", 00:16:38.215 "block_size": 512, 00:16:38.215 "num_blocks": 65536, 00:16:38.215 "uuid": "4f679435-a26f-41d7-b577-4195b77f42f5", 00:16:38.215 "assigned_rate_limits": { 00:16:38.215 "rw_ios_per_sec": 0, 00:16:38.215 "rw_mbytes_per_sec": 0, 00:16:38.215 "r_mbytes_per_sec": 0, 00:16:38.215 "w_mbytes_per_sec": 0 00:16:38.215 }, 00:16:38.215 "claimed": true, 00:16:38.215 "claim_type": "exclusive_write", 00:16:38.215 "zoned": false, 00:16:38.215 "supported_io_types": { 00:16:38.215 "read": true, 00:16:38.215 "write": true, 00:16:38.215 "unmap": true, 00:16:38.215 "flush": true, 00:16:38.215 "reset": true, 00:16:38.215 "nvme_admin": false, 00:16:38.215 "nvme_io": false, 00:16:38.215 "nvme_io_md": false, 00:16:38.215 "write_zeroes": true, 00:16:38.215 "zcopy": true, 00:16:38.215 "get_zone_info": false, 00:16:38.215 "zone_management": false, 00:16:38.215 "zone_append": false, 00:16:38.215 "compare": false, 00:16:38.215 "compare_and_write": false, 00:16:38.215 "abort": true, 00:16:38.215 "seek_hole": false, 00:16:38.215 "seek_data": false, 00:16:38.215 "copy": true, 00:16:38.215 "nvme_iov_md": false 00:16:38.215 }, 00:16:38.215 "memory_domains": [ 00:16:38.215 { 00:16:38.215 "dma_device_id": "system", 00:16:38.215 "dma_device_type": 1 00:16:38.215 }, 00:16:38.215 { 00:16:38.215 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:38.215 "dma_device_type": 2 00:16:38.215 } 
00:16:38.215 ], 00:16:38.215 "driver_specific": {} 00:16:38.215 } 00:16:38.215 ] 00:16:38.215 10:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.215 10:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:38.215 10:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:38.215 10:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:38.215 10:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:38.215 10:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:38.215 10:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:38.215 10:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:38.215 10:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:38.215 10:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:38.215 10:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:38.215 10:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:38.215 10:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.215 10:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:38.215 10:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.215 10:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.215 10:01:14 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.215 10:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:38.215 "name": "Existed_Raid", 00:16:38.215 "uuid": "7a3643b5-8acd-4385-ae76-262d7735cf8d", 00:16:38.215 "strip_size_kb": 64, 00:16:38.215 "state": "online", 00:16:38.215 "raid_level": "raid5f", 00:16:38.215 "superblock": false, 00:16:38.215 "num_base_bdevs": 4, 00:16:38.215 "num_base_bdevs_discovered": 4, 00:16:38.215 "num_base_bdevs_operational": 4, 00:16:38.215 "base_bdevs_list": [ 00:16:38.215 { 00:16:38.215 "name": "NewBaseBdev", 00:16:38.215 "uuid": "4f679435-a26f-41d7-b577-4195b77f42f5", 00:16:38.215 "is_configured": true, 00:16:38.215 "data_offset": 0, 00:16:38.215 "data_size": 65536 00:16:38.215 }, 00:16:38.215 { 00:16:38.215 "name": "BaseBdev2", 00:16:38.215 "uuid": "0fa8604f-37c3-4905-9e78-cbaaca9523a5", 00:16:38.215 "is_configured": true, 00:16:38.215 "data_offset": 0, 00:16:38.215 "data_size": 65536 00:16:38.215 }, 00:16:38.215 { 00:16:38.215 "name": "BaseBdev3", 00:16:38.215 "uuid": "40f246fb-1b37-4449-99c9-ec6728d26a3c", 00:16:38.215 "is_configured": true, 00:16:38.215 "data_offset": 0, 00:16:38.215 "data_size": 65536 00:16:38.215 }, 00:16:38.215 { 00:16:38.215 "name": "BaseBdev4", 00:16:38.215 "uuid": "8bd4d154-2f99-422d-aa2b-4e4f92966a4c", 00:16:38.215 "is_configured": true, 00:16:38.215 "data_offset": 0, 00:16:38.215 "data_size": 65536 00:16:38.215 } 00:16:38.215 ] 00:16:38.215 }' 00:16:38.215 10:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:38.215 10:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.499 10:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:38.499 10:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:38.499 10:01:14 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:38.499 10:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:38.499 10:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:38.499 10:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:38.499 10:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:38.499 10:01:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:38.499 10:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.499 10:01:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.499 [2024-10-21 10:01:14.996652] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:38.499 10:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.499 10:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:38.499 "name": "Existed_Raid", 00:16:38.499 "aliases": [ 00:16:38.499 "7a3643b5-8acd-4385-ae76-262d7735cf8d" 00:16:38.499 ], 00:16:38.499 "product_name": "Raid Volume", 00:16:38.499 "block_size": 512, 00:16:38.499 "num_blocks": 196608, 00:16:38.499 "uuid": "7a3643b5-8acd-4385-ae76-262d7735cf8d", 00:16:38.499 "assigned_rate_limits": { 00:16:38.499 "rw_ios_per_sec": 0, 00:16:38.499 "rw_mbytes_per_sec": 0, 00:16:38.499 "r_mbytes_per_sec": 0, 00:16:38.499 "w_mbytes_per_sec": 0 00:16:38.499 }, 00:16:38.499 "claimed": false, 00:16:38.499 "zoned": false, 00:16:38.499 "supported_io_types": { 00:16:38.499 "read": true, 00:16:38.499 "write": true, 00:16:38.499 "unmap": false, 00:16:38.499 "flush": false, 00:16:38.499 "reset": true, 00:16:38.499 "nvme_admin": false, 00:16:38.499 "nvme_io": false, 00:16:38.499 "nvme_io_md": 
false, 00:16:38.499 "write_zeroes": true, 00:16:38.499 "zcopy": false, 00:16:38.499 "get_zone_info": false, 00:16:38.499 "zone_management": false, 00:16:38.499 "zone_append": false, 00:16:38.499 "compare": false, 00:16:38.499 "compare_and_write": false, 00:16:38.499 "abort": false, 00:16:38.499 "seek_hole": false, 00:16:38.499 "seek_data": false, 00:16:38.499 "copy": false, 00:16:38.499 "nvme_iov_md": false 00:16:38.499 }, 00:16:38.499 "driver_specific": { 00:16:38.499 "raid": { 00:16:38.499 "uuid": "7a3643b5-8acd-4385-ae76-262d7735cf8d", 00:16:38.499 "strip_size_kb": 64, 00:16:38.499 "state": "online", 00:16:38.499 "raid_level": "raid5f", 00:16:38.499 "superblock": false, 00:16:38.499 "num_base_bdevs": 4, 00:16:38.499 "num_base_bdevs_discovered": 4, 00:16:38.499 "num_base_bdevs_operational": 4, 00:16:38.499 "base_bdevs_list": [ 00:16:38.499 { 00:16:38.499 "name": "NewBaseBdev", 00:16:38.499 "uuid": "4f679435-a26f-41d7-b577-4195b77f42f5", 00:16:38.499 "is_configured": true, 00:16:38.499 "data_offset": 0, 00:16:38.499 "data_size": 65536 00:16:38.499 }, 00:16:38.499 { 00:16:38.499 "name": "BaseBdev2", 00:16:38.499 "uuid": "0fa8604f-37c3-4905-9e78-cbaaca9523a5", 00:16:38.499 "is_configured": true, 00:16:38.499 "data_offset": 0, 00:16:38.499 "data_size": 65536 00:16:38.499 }, 00:16:38.499 { 00:16:38.499 "name": "BaseBdev3", 00:16:38.499 "uuid": "40f246fb-1b37-4449-99c9-ec6728d26a3c", 00:16:38.499 "is_configured": true, 00:16:38.499 "data_offset": 0, 00:16:38.499 "data_size": 65536 00:16:38.499 }, 00:16:38.499 { 00:16:38.499 "name": "BaseBdev4", 00:16:38.499 "uuid": "8bd4d154-2f99-422d-aa2b-4e4f92966a4c", 00:16:38.499 "is_configured": true, 00:16:38.499 "data_offset": 0, 00:16:38.499 "data_size": 65536 00:16:38.499 } 00:16:38.499 ] 00:16:38.499 } 00:16:38.499 } 00:16:38.499 }' 00:16:38.499 10:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:38.499 10:01:15 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:38.499 BaseBdev2 00:16:38.499 BaseBdev3 00:16:38.499 BaseBdev4' 00:16:38.499 10:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:38.759 10:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:38.759 10:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:38.759 10:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:38.759 10:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.759 10:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.759 10:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:38.759 10:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.759 10:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:38.759 10:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:38.759 10:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:38.759 10:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:38.759 10:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.759 10:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.759 10:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:16:38.759 10:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.759 10:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:38.759 10:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:38.759 10:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:38.759 10:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:38.759 10:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:38.759 10:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.759 10:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.759 10:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.759 10:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:38.759 10:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:38.759 10:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:38.759 10:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:38.759 10:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:38.759 10:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.759 10:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.759 10:01:15 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.759 10:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:38.759 10:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:38.759 10:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:38.759 10:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.759 10:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.759 [2024-10-21 10:01:15.311840] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:38.759 [2024-10-21 10:01:15.311923] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:38.759 [2024-10-21 10:01:15.312030] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:38.759 [2024-10-21 10:01:15.312383] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:38.759 [2024-10-21 10:01:15.312395] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state offline 00:16:38.759 10:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.759 10:01:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 82424 00:16:38.759 10:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 82424 ']' 00:16:38.759 10:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # kill -0 82424 00:16:38.759 10:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # uname 00:16:38.759 10:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:38.759 10:01:15 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82424 00:16:39.019 10:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:39.019 10:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:39.019 killing process with pid 82424 00:16:39.019 10:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82424' 00:16:39.019 10:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@969 -- # kill 82424 00:16:39.019 [2024-10-21 10:01:15.361428] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:39.019 10:01:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@974 -- # wait 82424 00:16:39.279 [2024-10-21 10:01:15.800631] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:40.660 10:01:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:16:40.660 ************************************ 00:16:40.660 END TEST raid5f_state_function_test 00:16:40.660 ************************************ 00:16:40.660 00:16:40.660 real 0m11.760s 00:16:40.660 user 0m18.297s 00:16:40.660 sys 0m2.338s 00:16:40.660 10:01:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:40.660 10:01:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.660 10:01:17 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:16:40.660 10:01:17 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:16:40.660 10:01:17 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:40.660 10:01:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:40.660 ************************************ 00:16:40.660 START TEST 
raid5f_state_function_test_sb 00:16:40.660 ************************************ 00:16:40.660 10:01:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 4 true 00:16:40.660 10:01:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:16:40.660 10:01:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:16:40.660 10:01:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:40.660 10:01:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:40.660 10:01:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:40.660 10:01:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:40.660 10:01:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:40.660 10:01:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:40.660 10:01:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:40.660 10:01:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:40.660 10:01:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:40.660 10:01:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:40.660 10:01:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:40.660 10:01:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:40.660 10:01:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:40.660 10:01:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:16:40.660 
10:01:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:40.660 10:01:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:40.660 10:01:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:40.660 10:01:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:40.660 10:01:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:40.660 10:01:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:40.660 10:01:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:40.660 10:01:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:40.660 10:01:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:16:40.660 10:01:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:16:40.660 10:01:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:16:40.660 10:01:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:40.660 10:01:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:40.660 10:01:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=83096 00:16:40.660 10:01:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83096' 00:16:40.660 Process raid pid: 83096 00:16:40.660 10:01:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:40.660 10:01:17 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 83096 00:16:40.660 10:01:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 83096 ']' 00:16:40.660 10:01:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:40.660 10:01:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:40.660 10:01:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:40.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:40.660 10:01:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:40.660 10:01:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.660 [2024-10-21 10:01:17.204443] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
00:16:40.660 [2024-10-21 10:01:17.204654] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:40.918 [2024-10-21 10:01:17.367005] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:41.177 [2024-10-21 10:01:17.515749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:41.436 [2024-10-21 10:01:17.774829] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:41.436 [2024-10-21 10:01:17.774982] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:41.696 10:01:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:41.696 10:01:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:16:41.696 10:01:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:41.696 10:01:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.696 10:01:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.696 [2024-10-21 10:01:18.053538] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:41.696 [2024-10-21 10:01:18.053615] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:41.696 [2024-10-21 10:01:18.053627] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:41.696 [2024-10-21 10:01:18.053638] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:41.696 [2024-10-21 10:01:18.053645] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:16:41.696 [2024-10-21 10:01:18.053654] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:41.696 [2024-10-21 10:01:18.053660] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:41.696 [2024-10-21 10:01:18.053669] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:41.696 10:01:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.696 10:01:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:41.696 10:01:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:41.696 10:01:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:41.696 10:01:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:41.696 10:01:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:41.696 10:01:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:41.696 10:01:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:41.696 10:01:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:41.696 10:01:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:41.696 10:01:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:41.696 10:01:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.696 10:01:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:16:41.696 10:01:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.696 10:01:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:41.696 10:01:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.696 10:01:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:41.696 "name": "Existed_Raid", 00:16:41.696 "uuid": "294b87c0-5703-4c86-8dcd-c5750e2b4fbd", 00:16:41.696 "strip_size_kb": 64, 00:16:41.696 "state": "configuring", 00:16:41.696 "raid_level": "raid5f", 00:16:41.696 "superblock": true, 00:16:41.696 "num_base_bdevs": 4, 00:16:41.696 "num_base_bdevs_discovered": 0, 00:16:41.696 "num_base_bdevs_operational": 4, 00:16:41.696 "base_bdevs_list": [ 00:16:41.696 { 00:16:41.696 "name": "BaseBdev1", 00:16:41.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.696 "is_configured": false, 00:16:41.696 "data_offset": 0, 00:16:41.696 "data_size": 0 00:16:41.696 }, 00:16:41.696 { 00:16:41.696 "name": "BaseBdev2", 00:16:41.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.696 "is_configured": false, 00:16:41.696 "data_offset": 0, 00:16:41.696 "data_size": 0 00:16:41.696 }, 00:16:41.696 { 00:16:41.696 "name": "BaseBdev3", 00:16:41.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.696 "is_configured": false, 00:16:41.696 "data_offset": 0, 00:16:41.696 "data_size": 0 00:16:41.696 }, 00:16:41.696 { 00:16:41.696 "name": "BaseBdev4", 00:16:41.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.696 "is_configured": false, 00:16:41.696 "data_offset": 0, 00:16:41.696 "data_size": 0 00:16:41.696 } 00:16:41.696 ] 00:16:41.696 }' 00:16:41.696 10:01:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:41.696 10:01:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:41.956 10:01:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:41.956 10:01:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.956 10:01:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.956 [2024-10-21 10:01:18.484744] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:41.956 [2024-10-21 10:01:18.484843] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000005b80 name Existed_Raid, state configuring 00:16:41.957 10:01:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.957 10:01:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:41.957 10:01:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.957 10:01:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.957 [2024-10-21 10:01:18.496760] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:41.957 [2024-10-21 10:01:18.496847] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:41.957 [2024-10-21 10:01:18.496875] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:41.957 [2024-10-21 10:01:18.496899] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:41.957 [2024-10-21 10:01:18.496917] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:41.957 [2024-10-21 10:01:18.496939] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:41.957 [2024-10-21 10:01:18.496957] 
bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:41.957 [2024-10-21 10:01:18.496978] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:41.957 10:01:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.957 10:01:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:41.957 10:01:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.957 10:01:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.217 [2024-10-21 10:01:18.557267] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:42.217 BaseBdev1 00:16:42.217 10:01:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.217 10:01:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:42.217 10:01:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:16:42.217 10:01:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:42.217 10:01:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:42.217 10:01:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:42.217 10:01:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:42.217 10:01:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:42.217 10:01:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.217 10:01:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:16:42.217 10:01:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.217 10:01:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:42.217 10:01:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.217 10:01:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.217 [ 00:16:42.217 { 00:16:42.217 "name": "BaseBdev1", 00:16:42.217 "aliases": [ 00:16:42.217 "fa4da0c1-cd75-43e8-bb1a-f05a4dfbcc8f" 00:16:42.217 ], 00:16:42.217 "product_name": "Malloc disk", 00:16:42.217 "block_size": 512, 00:16:42.217 "num_blocks": 65536, 00:16:42.217 "uuid": "fa4da0c1-cd75-43e8-bb1a-f05a4dfbcc8f", 00:16:42.217 "assigned_rate_limits": { 00:16:42.217 "rw_ios_per_sec": 0, 00:16:42.217 "rw_mbytes_per_sec": 0, 00:16:42.217 "r_mbytes_per_sec": 0, 00:16:42.217 "w_mbytes_per_sec": 0 00:16:42.217 }, 00:16:42.217 "claimed": true, 00:16:42.217 "claim_type": "exclusive_write", 00:16:42.217 "zoned": false, 00:16:42.217 "supported_io_types": { 00:16:42.217 "read": true, 00:16:42.217 "write": true, 00:16:42.217 "unmap": true, 00:16:42.217 "flush": true, 00:16:42.217 "reset": true, 00:16:42.217 "nvme_admin": false, 00:16:42.217 "nvme_io": false, 00:16:42.217 "nvme_io_md": false, 00:16:42.217 "write_zeroes": true, 00:16:42.217 "zcopy": true, 00:16:42.217 "get_zone_info": false, 00:16:42.217 "zone_management": false, 00:16:42.217 "zone_append": false, 00:16:42.217 "compare": false, 00:16:42.217 "compare_and_write": false, 00:16:42.217 "abort": true, 00:16:42.217 "seek_hole": false, 00:16:42.217 "seek_data": false, 00:16:42.217 "copy": true, 00:16:42.217 "nvme_iov_md": false 00:16:42.217 }, 00:16:42.217 "memory_domains": [ 00:16:42.217 { 00:16:42.217 "dma_device_id": "system", 00:16:42.217 "dma_device_type": 1 00:16:42.217 }, 00:16:42.217 { 00:16:42.217 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:16:42.217 "dma_device_type": 2 00:16:42.217 } 00:16:42.217 ], 00:16:42.217 "driver_specific": {} 00:16:42.217 } 00:16:42.217 ] 00:16:42.217 10:01:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.217 10:01:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:42.217 10:01:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:42.217 10:01:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:42.217 10:01:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:42.218 10:01:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:42.218 10:01:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:42.218 10:01:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:42.218 10:01:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:42.218 10:01:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:42.218 10:01:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:42.218 10:01:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:42.218 10:01:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.218 10:01:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.218 10:01:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:42.218 10:01:18 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.218 10:01:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.218 10:01:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:42.218 "name": "Existed_Raid", 00:16:42.218 "uuid": "c5abc6ac-df04-4b7d-aa04-5e5fb9a93dc5", 00:16:42.218 "strip_size_kb": 64, 00:16:42.218 "state": "configuring", 00:16:42.218 "raid_level": "raid5f", 00:16:42.218 "superblock": true, 00:16:42.218 "num_base_bdevs": 4, 00:16:42.218 "num_base_bdevs_discovered": 1, 00:16:42.218 "num_base_bdevs_operational": 4, 00:16:42.218 "base_bdevs_list": [ 00:16:42.218 { 00:16:42.218 "name": "BaseBdev1", 00:16:42.218 "uuid": "fa4da0c1-cd75-43e8-bb1a-f05a4dfbcc8f", 00:16:42.218 "is_configured": true, 00:16:42.218 "data_offset": 2048, 00:16:42.218 "data_size": 63488 00:16:42.218 }, 00:16:42.218 { 00:16:42.218 "name": "BaseBdev2", 00:16:42.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.218 "is_configured": false, 00:16:42.218 "data_offset": 0, 00:16:42.218 "data_size": 0 00:16:42.218 }, 00:16:42.218 { 00:16:42.218 "name": "BaseBdev3", 00:16:42.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.218 "is_configured": false, 00:16:42.218 "data_offset": 0, 00:16:42.218 "data_size": 0 00:16:42.218 }, 00:16:42.218 { 00:16:42.218 "name": "BaseBdev4", 00:16:42.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.218 "is_configured": false, 00:16:42.218 "data_offset": 0, 00:16:42.218 "data_size": 0 00:16:42.218 } 00:16:42.218 ] 00:16:42.218 }' 00:16:42.218 10:01:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:42.218 10:01:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.477 10:01:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:42.477 10:01:19 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.477 10:01:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.477 [2024-10-21 10:01:19.068437] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:42.477 [2024-10-21 10:01:19.068500] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000005f00 name Existed_Raid, state configuring 00:16:42.736 10:01:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.736 10:01:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:42.736 10:01:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.736 10:01:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.736 [2024-10-21 10:01:19.080476] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:42.736 [2024-10-21 10:01:19.082624] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:42.736 [2024-10-21 10:01:19.082666] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:42.736 [2024-10-21 10:01:19.082675] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:42.736 [2024-10-21 10:01:19.082685] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:42.736 [2024-10-21 10:01:19.082693] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:42.736 [2024-10-21 10:01:19.082701] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:42.736 10:01:19 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.736 10:01:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:42.736 10:01:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:42.736 10:01:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:42.736 10:01:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:42.736 10:01:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:42.736 10:01:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:42.736 10:01:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:42.736 10:01:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:42.736 10:01:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:42.736 10:01:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:42.736 10:01:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:42.736 10:01:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:42.736 10:01:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.736 10:01:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.736 10:01:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.736 10:01:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:42.736 10:01:19 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.736 10:01:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:42.736 "name": "Existed_Raid", 00:16:42.736 "uuid": "f2670427-58a6-420a-96c2-1e025cad62c7", 00:16:42.737 "strip_size_kb": 64, 00:16:42.737 "state": "configuring", 00:16:42.737 "raid_level": "raid5f", 00:16:42.737 "superblock": true, 00:16:42.737 "num_base_bdevs": 4, 00:16:42.737 "num_base_bdevs_discovered": 1, 00:16:42.737 "num_base_bdevs_operational": 4, 00:16:42.737 "base_bdevs_list": [ 00:16:42.737 { 00:16:42.737 "name": "BaseBdev1", 00:16:42.737 "uuid": "fa4da0c1-cd75-43e8-bb1a-f05a4dfbcc8f", 00:16:42.737 "is_configured": true, 00:16:42.737 "data_offset": 2048, 00:16:42.737 "data_size": 63488 00:16:42.737 }, 00:16:42.737 { 00:16:42.737 "name": "BaseBdev2", 00:16:42.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.737 "is_configured": false, 00:16:42.737 "data_offset": 0, 00:16:42.737 "data_size": 0 00:16:42.737 }, 00:16:42.737 { 00:16:42.737 "name": "BaseBdev3", 00:16:42.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.737 "is_configured": false, 00:16:42.737 "data_offset": 0, 00:16:42.737 "data_size": 0 00:16:42.737 }, 00:16:42.737 { 00:16:42.737 "name": "BaseBdev4", 00:16:42.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.737 "is_configured": false, 00:16:42.737 "data_offset": 0, 00:16:42.737 "data_size": 0 00:16:42.737 } 00:16:42.737 ] 00:16:42.737 }' 00:16:42.737 10:01:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:42.737 10:01:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.996 10:01:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:42.996 10:01:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:16:42.996 10:01:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.996 [2024-10-21 10:01:19.551192] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:42.996 BaseBdev2 00:16:42.996 10:01:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.996 10:01:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:42.996 10:01:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:16:42.996 10:01:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:42.996 10:01:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:42.996 10:01:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:42.996 10:01:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:42.996 10:01:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:42.996 10:01:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.996 10:01:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.996 10:01:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.996 10:01:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:42.996 10:01:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.996 10:01:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.996 [ 00:16:42.996 { 00:16:42.996 "name": "BaseBdev2", 00:16:42.996 "aliases": [ 00:16:42.996 
"70752daa-0c5f-491e-970a-f99c06018eaa" 00:16:42.996 ], 00:16:42.996 "product_name": "Malloc disk", 00:16:42.996 "block_size": 512, 00:16:42.996 "num_blocks": 65536, 00:16:42.996 "uuid": "70752daa-0c5f-491e-970a-f99c06018eaa", 00:16:42.996 "assigned_rate_limits": { 00:16:42.996 "rw_ios_per_sec": 0, 00:16:42.996 "rw_mbytes_per_sec": 0, 00:16:42.996 "r_mbytes_per_sec": 0, 00:16:42.996 "w_mbytes_per_sec": 0 00:16:42.996 }, 00:16:42.996 "claimed": true, 00:16:42.996 "claim_type": "exclusive_write", 00:16:42.996 "zoned": false, 00:16:42.996 "supported_io_types": { 00:16:42.996 "read": true, 00:16:42.996 "write": true, 00:16:42.996 "unmap": true, 00:16:42.996 "flush": true, 00:16:42.996 "reset": true, 00:16:42.996 "nvme_admin": false, 00:16:42.996 "nvme_io": false, 00:16:42.996 "nvme_io_md": false, 00:16:42.996 "write_zeroes": true, 00:16:42.996 "zcopy": true, 00:16:42.996 "get_zone_info": false, 00:16:42.996 "zone_management": false, 00:16:42.996 "zone_append": false, 00:16:42.996 "compare": false, 00:16:42.996 "compare_and_write": false, 00:16:42.996 "abort": true, 00:16:42.996 "seek_hole": false, 00:16:42.996 "seek_data": false, 00:16:42.996 "copy": true, 00:16:42.996 "nvme_iov_md": false 00:16:42.996 }, 00:16:43.257 "memory_domains": [ 00:16:43.257 { 00:16:43.257 "dma_device_id": "system", 00:16:43.257 "dma_device_type": 1 00:16:43.257 }, 00:16:43.257 { 00:16:43.257 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:43.257 "dma_device_type": 2 00:16:43.257 } 00:16:43.257 ], 00:16:43.257 "driver_specific": {} 00:16:43.257 } 00:16:43.257 ] 00:16:43.257 10:01:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.257 10:01:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:43.257 10:01:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:43.257 10:01:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:16:43.257 10:01:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:43.257 10:01:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:43.257 10:01:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:43.257 10:01:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:43.257 10:01:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:43.257 10:01:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:43.257 10:01:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:43.257 10:01:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:43.257 10:01:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:43.257 10:01:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:43.257 10:01:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:43.257 10:01:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.257 10:01:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.257 10:01:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.257 10:01:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.257 10:01:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:43.257 "name": "Existed_Raid", 00:16:43.257 "uuid": 
"f2670427-58a6-420a-96c2-1e025cad62c7", 00:16:43.257 "strip_size_kb": 64, 00:16:43.257 "state": "configuring", 00:16:43.257 "raid_level": "raid5f", 00:16:43.257 "superblock": true, 00:16:43.257 "num_base_bdevs": 4, 00:16:43.257 "num_base_bdevs_discovered": 2, 00:16:43.257 "num_base_bdevs_operational": 4, 00:16:43.257 "base_bdevs_list": [ 00:16:43.257 { 00:16:43.257 "name": "BaseBdev1", 00:16:43.257 "uuid": "fa4da0c1-cd75-43e8-bb1a-f05a4dfbcc8f", 00:16:43.257 "is_configured": true, 00:16:43.257 "data_offset": 2048, 00:16:43.257 "data_size": 63488 00:16:43.257 }, 00:16:43.257 { 00:16:43.257 "name": "BaseBdev2", 00:16:43.257 "uuid": "70752daa-0c5f-491e-970a-f99c06018eaa", 00:16:43.257 "is_configured": true, 00:16:43.257 "data_offset": 2048, 00:16:43.257 "data_size": 63488 00:16:43.257 }, 00:16:43.257 { 00:16:43.257 "name": "BaseBdev3", 00:16:43.257 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.257 "is_configured": false, 00:16:43.257 "data_offset": 0, 00:16:43.257 "data_size": 0 00:16:43.257 }, 00:16:43.257 { 00:16:43.257 "name": "BaseBdev4", 00:16:43.257 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.257 "is_configured": false, 00:16:43.257 "data_offset": 0, 00:16:43.257 "data_size": 0 00:16:43.257 } 00:16:43.257 ] 00:16:43.257 }' 00:16:43.257 10:01:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:43.257 10:01:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.517 10:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:43.517 10:01:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.517 10:01:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.517 [2024-10-21 10:01:20.068867] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:43.517 BaseBdev3 
00:16:43.517 10:01:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.517 10:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:43.517 10:01:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:16:43.517 10:01:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:43.517 10:01:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:43.517 10:01:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:43.517 10:01:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:43.517 10:01:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:43.517 10:01:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.517 10:01:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.517 10:01:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.517 10:01:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:43.517 10:01:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.517 10:01:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.517 [ 00:16:43.517 { 00:16:43.517 "name": "BaseBdev3", 00:16:43.517 "aliases": [ 00:16:43.517 "cc3339cb-315a-47f7-a221-0965fd9486ae" 00:16:43.517 ], 00:16:43.517 "product_name": "Malloc disk", 00:16:43.517 "block_size": 512, 00:16:43.517 "num_blocks": 65536, 00:16:43.517 "uuid": "cc3339cb-315a-47f7-a221-0965fd9486ae", 00:16:43.517 
"assigned_rate_limits": { 00:16:43.517 "rw_ios_per_sec": 0, 00:16:43.517 "rw_mbytes_per_sec": 0, 00:16:43.517 "r_mbytes_per_sec": 0, 00:16:43.517 "w_mbytes_per_sec": 0 00:16:43.517 }, 00:16:43.517 "claimed": true, 00:16:43.517 "claim_type": "exclusive_write", 00:16:43.517 "zoned": false, 00:16:43.517 "supported_io_types": { 00:16:43.517 "read": true, 00:16:43.517 "write": true, 00:16:43.517 "unmap": true, 00:16:43.517 "flush": true, 00:16:43.517 "reset": true, 00:16:43.517 "nvme_admin": false, 00:16:43.517 "nvme_io": false, 00:16:43.517 "nvme_io_md": false, 00:16:43.517 "write_zeroes": true, 00:16:43.517 "zcopy": true, 00:16:43.517 "get_zone_info": false, 00:16:43.517 "zone_management": false, 00:16:43.517 "zone_append": false, 00:16:43.517 "compare": false, 00:16:43.517 "compare_and_write": false, 00:16:43.517 "abort": true, 00:16:43.517 "seek_hole": false, 00:16:43.517 "seek_data": false, 00:16:43.517 "copy": true, 00:16:43.517 "nvme_iov_md": false 00:16:43.517 }, 00:16:43.517 "memory_domains": [ 00:16:43.517 { 00:16:43.517 "dma_device_id": "system", 00:16:43.517 "dma_device_type": 1 00:16:43.517 }, 00:16:43.517 { 00:16:43.517 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:43.517 "dma_device_type": 2 00:16:43.517 } 00:16:43.517 ], 00:16:43.517 "driver_specific": {} 00:16:43.517 } 00:16:43.517 ] 00:16:43.517 10:01:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.517 10:01:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:43.517 10:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:43.777 10:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:43.777 10:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:43.777 10:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:16:43.777 10:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:43.777 10:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:43.777 10:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:43.777 10:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:43.777 10:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:43.777 10:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:43.777 10:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:43.777 10:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:43.777 10:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.777 10:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:43.777 10:01:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.777 10:01:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.777 10:01:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.777 10:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:43.777 "name": "Existed_Raid", 00:16:43.777 "uuid": "f2670427-58a6-420a-96c2-1e025cad62c7", 00:16:43.777 "strip_size_kb": 64, 00:16:43.777 "state": "configuring", 00:16:43.777 "raid_level": "raid5f", 00:16:43.777 "superblock": true, 00:16:43.777 "num_base_bdevs": 4, 00:16:43.777 "num_base_bdevs_discovered": 3, 
00:16:43.777 "num_base_bdevs_operational": 4, 00:16:43.777 "base_bdevs_list": [ 00:16:43.777 { 00:16:43.777 "name": "BaseBdev1", 00:16:43.777 "uuid": "fa4da0c1-cd75-43e8-bb1a-f05a4dfbcc8f", 00:16:43.777 "is_configured": true, 00:16:43.777 "data_offset": 2048, 00:16:43.777 "data_size": 63488 00:16:43.777 }, 00:16:43.777 { 00:16:43.777 "name": "BaseBdev2", 00:16:43.777 "uuid": "70752daa-0c5f-491e-970a-f99c06018eaa", 00:16:43.777 "is_configured": true, 00:16:43.777 "data_offset": 2048, 00:16:43.777 "data_size": 63488 00:16:43.777 }, 00:16:43.777 { 00:16:43.777 "name": "BaseBdev3", 00:16:43.777 "uuid": "cc3339cb-315a-47f7-a221-0965fd9486ae", 00:16:43.777 "is_configured": true, 00:16:43.777 "data_offset": 2048, 00:16:43.777 "data_size": 63488 00:16:43.777 }, 00:16:43.777 { 00:16:43.777 "name": "BaseBdev4", 00:16:43.777 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.777 "is_configured": false, 00:16:43.777 "data_offset": 0, 00:16:43.777 "data_size": 0 00:16:43.777 } 00:16:43.777 ] 00:16:43.777 }' 00:16:43.777 10:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:43.777 10:01:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.037 10:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:44.037 10:01:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.037 10:01:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.037 [2024-10-21 10:01:20.579446] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:44.037 [2024-10-21 10:01:20.579818] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:16:44.037 [2024-10-21 10:01:20.579837] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:44.037 [2024-10-21 
10:01:20.580141] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:16:44.037 BaseBdev4 00:16:44.037 10:01:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.037 10:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:16:44.037 10:01:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:16:44.037 10:01:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:44.037 10:01:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:44.037 10:01:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:44.037 10:01:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:44.037 10:01:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:44.037 10:01:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.037 10:01:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.037 [2024-10-21 10:01:20.588650] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:16:44.037 [2024-10-21 10:01:20.588717] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006280 00:16:44.037 [2024-10-21 10:01:20.589037] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:44.037 10:01:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.037 10:01:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:44.037 10:01:20 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.037 10:01:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.037 [ 00:16:44.037 { 00:16:44.037 "name": "BaseBdev4", 00:16:44.037 "aliases": [ 00:16:44.037 "dfd46c53-8489-4c71-98cf-9929e5d6fcde" 00:16:44.037 ], 00:16:44.037 "product_name": "Malloc disk", 00:16:44.037 "block_size": 512, 00:16:44.037 "num_blocks": 65536, 00:16:44.037 "uuid": "dfd46c53-8489-4c71-98cf-9929e5d6fcde", 00:16:44.037 "assigned_rate_limits": { 00:16:44.037 "rw_ios_per_sec": 0, 00:16:44.037 "rw_mbytes_per_sec": 0, 00:16:44.037 "r_mbytes_per_sec": 0, 00:16:44.037 "w_mbytes_per_sec": 0 00:16:44.037 }, 00:16:44.037 "claimed": true, 00:16:44.037 "claim_type": "exclusive_write", 00:16:44.037 "zoned": false, 00:16:44.037 "supported_io_types": { 00:16:44.037 "read": true, 00:16:44.037 "write": true, 00:16:44.037 "unmap": true, 00:16:44.037 "flush": true, 00:16:44.037 "reset": true, 00:16:44.037 "nvme_admin": false, 00:16:44.037 "nvme_io": false, 00:16:44.037 "nvme_io_md": false, 00:16:44.037 "write_zeroes": true, 00:16:44.037 "zcopy": true, 00:16:44.037 "get_zone_info": false, 00:16:44.037 "zone_management": false, 00:16:44.037 "zone_append": false, 00:16:44.037 "compare": false, 00:16:44.037 "compare_and_write": false, 00:16:44.037 "abort": true, 00:16:44.037 "seek_hole": false, 00:16:44.037 "seek_data": false, 00:16:44.037 "copy": true, 00:16:44.037 "nvme_iov_md": false 00:16:44.037 }, 00:16:44.037 "memory_domains": [ 00:16:44.037 { 00:16:44.037 "dma_device_id": "system", 00:16:44.037 "dma_device_type": 1 00:16:44.037 }, 00:16:44.037 { 00:16:44.037 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:44.037 "dma_device_type": 2 00:16:44.037 } 00:16:44.037 ], 00:16:44.037 "driver_specific": {} 00:16:44.037 } 00:16:44.037 ] 00:16:44.037 10:01:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.037 10:01:20 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:44.038 10:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:44.038 10:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:44.038 10:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:44.038 10:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:44.038 10:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:44.038 10:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:44.038 10:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:44.038 10:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:44.038 10:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:44.038 10:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:44.038 10:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:44.038 10:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:44.038 10:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:44.298 10:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.298 10:01:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.298 10:01:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:44.298 10:01:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.298 10:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:44.298 "name": "Existed_Raid", 00:16:44.298 "uuid": "f2670427-58a6-420a-96c2-1e025cad62c7", 00:16:44.298 "strip_size_kb": 64, 00:16:44.298 "state": "online", 00:16:44.298 "raid_level": "raid5f", 00:16:44.298 "superblock": true, 00:16:44.298 "num_base_bdevs": 4, 00:16:44.298 "num_base_bdevs_discovered": 4, 00:16:44.298 "num_base_bdevs_operational": 4, 00:16:44.298 "base_bdevs_list": [ 00:16:44.298 { 00:16:44.298 "name": "BaseBdev1", 00:16:44.298 "uuid": "fa4da0c1-cd75-43e8-bb1a-f05a4dfbcc8f", 00:16:44.298 "is_configured": true, 00:16:44.298 "data_offset": 2048, 00:16:44.298 "data_size": 63488 00:16:44.298 }, 00:16:44.298 { 00:16:44.298 "name": "BaseBdev2", 00:16:44.298 "uuid": "70752daa-0c5f-491e-970a-f99c06018eaa", 00:16:44.298 "is_configured": true, 00:16:44.298 "data_offset": 2048, 00:16:44.298 "data_size": 63488 00:16:44.298 }, 00:16:44.298 { 00:16:44.298 "name": "BaseBdev3", 00:16:44.298 "uuid": "cc3339cb-315a-47f7-a221-0965fd9486ae", 00:16:44.298 "is_configured": true, 00:16:44.298 "data_offset": 2048, 00:16:44.298 "data_size": 63488 00:16:44.298 }, 00:16:44.298 { 00:16:44.298 "name": "BaseBdev4", 00:16:44.298 "uuid": "dfd46c53-8489-4c71-98cf-9929e5d6fcde", 00:16:44.298 "is_configured": true, 00:16:44.298 "data_offset": 2048, 00:16:44.298 "data_size": 63488 00:16:44.298 } 00:16:44.298 ] 00:16:44.298 }' 00:16:44.298 10:01:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:44.298 10:01:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.558 10:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:44.558 10:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:16:44.558 10:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:44.558 10:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:44.558 10:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:44.558 10:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:44.558 10:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:44.558 10:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:44.558 10:01:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.558 10:01:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.558 [2024-10-21 10:01:21.089339] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:44.558 10:01:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.558 10:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:44.558 "name": "Existed_Raid", 00:16:44.558 "aliases": [ 00:16:44.558 "f2670427-58a6-420a-96c2-1e025cad62c7" 00:16:44.558 ], 00:16:44.558 "product_name": "Raid Volume", 00:16:44.558 "block_size": 512, 00:16:44.558 "num_blocks": 190464, 00:16:44.558 "uuid": "f2670427-58a6-420a-96c2-1e025cad62c7", 00:16:44.558 "assigned_rate_limits": { 00:16:44.558 "rw_ios_per_sec": 0, 00:16:44.558 "rw_mbytes_per_sec": 0, 00:16:44.558 "r_mbytes_per_sec": 0, 00:16:44.558 "w_mbytes_per_sec": 0 00:16:44.558 }, 00:16:44.558 "claimed": false, 00:16:44.558 "zoned": false, 00:16:44.558 "supported_io_types": { 00:16:44.558 "read": true, 00:16:44.558 "write": true, 00:16:44.558 "unmap": false, 00:16:44.558 "flush": false, 
00:16:44.558 "reset": true, 00:16:44.558 "nvme_admin": false, 00:16:44.558 "nvme_io": false, 00:16:44.558 "nvme_io_md": false, 00:16:44.558 "write_zeroes": true, 00:16:44.558 "zcopy": false, 00:16:44.558 "get_zone_info": false, 00:16:44.558 "zone_management": false, 00:16:44.558 "zone_append": false, 00:16:44.559 "compare": false, 00:16:44.559 "compare_and_write": false, 00:16:44.559 "abort": false, 00:16:44.559 "seek_hole": false, 00:16:44.559 "seek_data": false, 00:16:44.559 "copy": false, 00:16:44.559 "nvme_iov_md": false 00:16:44.559 }, 00:16:44.559 "driver_specific": { 00:16:44.559 "raid": { 00:16:44.559 "uuid": "f2670427-58a6-420a-96c2-1e025cad62c7", 00:16:44.559 "strip_size_kb": 64, 00:16:44.559 "state": "online", 00:16:44.559 "raid_level": "raid5f", 00:16:44.559 "superblock": true, 00:16:44.559 "num_base_bdevs": 4, 00:16:44.559 "num_base_bdevs_discovered": 4, 00:16:44.559 "num_base_bdevs_operational": 4, 00:16:44.559 "base_bdevs_list": [ 00:16:44.559 { 00:16:44.559 "name": "BaseBdev1", 00:16:44.559 "uuid": "fa4da0c1-cd75-43e8-bb1a-f05a4dfbcc8f", 00:16:44.559 "is_configured": true, 00:16:44.559 "data_offset": 2048, 00:16:44.559 "data_size": 63488 00:16:44.559 }, 00:16:44.559 { 00:16:44.559 "name": "BaseBdev2", 00:16:44.559 "uuid": "70752daa-0c5f-491e-970a-f99c06018eaa", 00:16:44.559 "is_configured": true, 00:16:44.559 "data_offset": 2048, 00:16:44.559 "data_size": 63488 00:16:44.559 }, 00:16:44.559 { 00:16:44.559 "name": "BaseBdev3", 00:16:44.559 "uuid": "cc3339cb-315a-47f7-a221-0965fd9486ae", 00:16:44.559 "is_configured": true, 00:16:44.559 "data_offset": 2048, 00:16:44.559 "data_size": 63488 00:16:44.559 }, 00:16:44.559 { 00:16:44.559 "name": "BaseBdev4", 00:16:44.559 "uuid": "dfd46c53-8489-4c71-98cf-9929e5d6fcde", 00:16:44.559 "is_configured": true, 00:16:44.559 "data_offset": 2048, 00:16:44.559 "data_size": 63488 00:16:44.559 } 00:16:44.559 ] 00:16:44.559 } 00:16:44.559 } 00:16:44.559 }' 00:16:44.559 10:01:21 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:44.819 10:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:44.819 BaseBdev2 00:16:44.819 BaseBdev3 00:16:44.819 BaseBdev4' 00:16:44.819 10:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:44.819 10:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:44.819 10:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:44.819 10:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:44.819 10:01:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.819 10:01:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.819 10:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:44.819 10:01:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.819 10:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:44.819 10:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:44.819 10:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:44.819 10:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:44.819 10:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:44.819 10:01:21 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.819 10:01:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.819 10:01:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.819 10:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:44.819 10:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:44.819 10:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:44.819 10:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:44.819 10:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:44.819 10:01:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.819 10:01:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.819 10:01:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.819 10:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:44.819 10:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:44.819 10:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:44.819 10:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:44.819 10:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:44.819 10:01:21 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.819 10:01:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.819 10:01:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.819 10:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:44.819 10:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:44.819 10:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:44.819 10:01:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.819 10:01:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.819 [2024-10-21 10:01:21.412605] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:45.080 10:01:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.080 10:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:45.080 10:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:16:45.080 10:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:45.080 10:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:16:45.080 10:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:45.080 10:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:45.080 10:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:45.080 10:01:21 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:45.080 10:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:45.080 10:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:45.080 10:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:45.080 10:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:45.080 10:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:45.080 10:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:45.080 10:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:45.080 10:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.080 10:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:45.080 10:01:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.080 10:01:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.080 10:01:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.080 10:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:45.080 "name": "Existed_Raid", 00:16:45.080 "uuid": "f2670427-58a6-420a-96c2-1e025cad62c7", 00:16:45.080 "strip_size_kb": 64, 00:16:45.080 "state": "online", 00:16:45.080 "raid_level": "raid5f", 00:16:45.080 "superblock": true, 00:16:45.080 "num_base_bdevs": 4, 00:16:45.080 "num_base_bdevs_discovered": 3, 00:16:45.080 "num_base_bdevs_operational": 3, 00:16:45.080 "base_bdevs_list": [ 00:16:45.080 { 00:16:45.080 "name": 
null, 00:16:45.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:45.080 "is_configured": false, 00:16:45.080 "data_offset": 0, 00:16:45.080 "data_size": 63488 00:16:45.080 }, 00:16:45.080 { 00:16:45.080 "name": "BaseBdev2", 00:16:45.080 "uuid": "70752daa-0c5f-491e-970a-f99c06018eaa", 00:16:45.080 "is_configured": true, 00:16:45.080 "data_offset": 2048, 00:16:45.080 "data_size": 63488 00:16:45.080 }, 00:16:45.080 { 00:16:45.080 "name": "BaseBdev3", 00:16:45.080 "uuid": "cc3339cb-315a-47f7-a221-0965fd9486ae", 00:16:45.080 "is_configured": true, 00:16:45.080 "data_offset": 2048, 00:16:45.080 "data_size": 63488 00:16:45.080 }, 00:16:45.080 { 00:16:45.080 "name": "BaseBdev4", 00:16:45.080 "uuid": "dfd46c53-8489-4c71-98cf-9929e5d6fcde", 00:16:45.080 "is_configured": true, 00:16:45.080 "data_offset": 2048, 00:16:45.080 "data_size": 63488 00:16:45.080 } 00:16:45.080 ] 00:16:45.080 }' 00:16:45.080 10:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:45.080 10:01:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.651 10:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:45.651 10:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:45.651 10:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.651 10:01:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.651 10:01:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.651 10:01:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:45.651 10:01:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.651 10:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # 
raid_bdev=Existed_Raid 00:16:45.651 10:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:45.651 10:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:45.651 10:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.651 10:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.651 [2024-10-21 10:01:22.030613] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:45.651 [2024-10-21 10:01:22.030798] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:45.651 [2024-10-21 10:01:22.137572] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:45.651 10:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.651 10:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:45.651 10:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:45.651 10:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:45.651 10:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.651 10:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.651 10:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.651 10:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.651 10:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:45.651 10:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:16:45.651 10:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:45.651 10:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.651 10:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.651 [2024-10-21 10:01:22.197476] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:46.078 10:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.078 10:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:46.078 10:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:46.078 10:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.078 10:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:46.078 10:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.078 10:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.078 10:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.078 10:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:46.078 10:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:46.078 10:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:16:46.078 10:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.078 10:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.078 [2024-10-21 
10:01:22.363386] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:46.078 [2024-10-21 10:01:22.363492] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state offline 00:16:46.078 10:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.078 10:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:46.078 10:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:46.078 10:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.078 10:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:46.078 10:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.078 10:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.078 10:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.078 10:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:46.078 10:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:46.078 10:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:16:46.078 10:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:46.078 10:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:46.078 10:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:46.078 10:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.078 10:01:22 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.078 BaseBdev2 00:16:46.078 10:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.078 10:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:46.078 10:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:16:46.078 10:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:46.078 10:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:46.078 10:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:46.078 10:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:46.078 10:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:46.078 10:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.078 10:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.363 10:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.363 10:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:46.363 10:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.363 10:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.363 [ 00:16:46.363 { 00:16:46.363 "name": "BaseBdev2", 00:16:46.363 "aliases": [ 00:16:46.363 "d801bd5f-1a2c-4ed0-a3c4-fd0aae97b065" 00:16:46.363 ], 00:16:46.363 "product_name": "Malloc disk", 00:16:46.363 "block_size": 512, 00:16:46.363 
"num_blocks": 65536, 00:16:46.363 "uuid": "d801bd5f-1a2c-4ed0-a3c4-fd0aae97b065", 00:16:46.363 "assigned_rate_limits": { 00:16:46.363 "rw_ios_per_sec": 0, 00:16:46.363 "rw_mbytes_per_sec": 0, 00:16:46.363 "r_mbytes_per_sec": 0, 00:16:46.363 "w_mbytes_per_sec": 0 00:16:46.363 }, 00:16:46.363 "claimed": false, 00:16:46.363 "zoned": false, 00:16:46.363 "supported_io_types": { 00:16:46.363 "read": true, 00:16:46.363 "write": true, 00:16:46.363 "unmap": true, 00:16:46.363 "flush": true, 00:16:46.363 "reset": true, 00:16:46.363 "nvme_admin": false, 00:16:46.363 "nvme_io": false, 00:16:46.363 "nvme_io_md": false, 00:16:46.363 "write_zeroes": true, 00:16:46.363 "zcopy": true, 00:16:46.363 "get_zone_info": false, 00:16:46.363 "zone_management": false, 00:16:46.363 "zone_append": false, 00:16:46.363 "compare": false, 00:16:46.363 "compare_and_write": false, 00:16:46.363 "abort": true, 00:16:46.363 "seek_hole": false, 00:16:46.363 "seek_data": false, 00:16:46.363 "copy": true, 00:16:46.363 "nvme_iov_md": false 00:16:46.363 }, 00:16:46.363 "memory_domains": [ 00:16:46.363 { 00:16:46.363 "dma_device_id": "system", 00:16:46.363 "dma_device_type": 1 00:16:46.363 }, 00:16:46.363 { 00:16:46.363 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:46.363 "dma_device_type": 2 00:16:46.363 } 00:16:46.363 ], 00:16:46.363 "driver_specific": {} 00:16:46.363 } 00:16:46.363 ] 00:16:46.363 10:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.363 10:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:46.363 10:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:46.363 10:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:46.363 10:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:46.363 10:01:22 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.363 10:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.363 BaseBdev3 00:16:46.363 10:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.363 10:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:46.363 10:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:16:46.363 10:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:46.363 10:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:46.363 10:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:46.363 10:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:46.363 10:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:46.363 10:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.363 10:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.363 10:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.363 10:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:46.363 10:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.363 10:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.363 [ 00:16:46.363 { 00:16:46.363 "name": "BaseBdev3", 00:16:46.363 "aliases": [ 00:16:46.363 
"2ed5b5e9-a3d3-4d12-9f21-9fc2ce7a253e" 00:16:46.363 ], 00:16:46.364 "product_name": "Malloc disk", 00:16:46.364 "block_size": 512, 00:16:46.364 "num_blocks": 65536, 00:16:46.364 "uuid": "2ed5b5e9-a3d3-4d12-9f21-9fc2ce7a253e", 00:16:46.364 "assigned_rate_limits": { 00:16:46.364 "rw_ios_per_sec": 0, 00:16:46.364 "rw_mbytes_per_sec": 0, 00:16:46.364 "r_mbytes_per_sec": 0, 00:16:46.364 "w_mbytes_per_sec": 0 00:16:46.364 }, 00:16:46.364 "claimed": false, 00:16:46.364 "zoned": false, 00:16:46.364 "supported_io_types": { 00:16:46.364 "read": true, 00:16:46.364 "write": true, 00:16:46.364 "unmap": true, 00:16:46.364 "flush": true, 00:16:46.364 "reset": true, 00:16:46.364 "nvme_admin": false, 00:16:46.364 "nvme_io": false, 00:16:46.364 "nvme_io_md": false, 00:16:46.364 "write_zeroes": true, 00:16:46.364 "zcopy": true, 00:16:46.364 "get_zone_info": false, 00:16:46.364 "zone_management": false, 00:16:46.364 "zone_append": false, 00:16:46.364 "compare": false, 00:16:46.364 "compare_and_write": false, 00:16:46.364 "abort": true, 00:16:46.364 "seek_hole": false, 00:16:46.364 "seek_data": false, 00:16:46.364 "copy": true, 00:16:46.364 "nvme_iov_md": false 00:16:46.364 }, 00:16:46.364 "memory_domains": [ 00:16:46.364 { 00:16:46.364 "dma_device_id": "system", 00:16:46.364 "dma_device_type": 1 00:16:46.364 }, 00:16:46.364 { 00:16:46.364 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:46.364 "dma_device_type": 2 00:16:46.364 } 00:16:46.364 ], 00:16:46.364 "driver_specific": {} 00:16:46.364 } 00:16:46.364 ] 00:16:46.364 10:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.364 10:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:46.364 10:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:46.364 10:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:46.364 10:01:22 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:46.364 10:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.364 10:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.364 BaseBdev4 00:16:46.364 10:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.364 10:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:16:46.364 10:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:16:46.364 10:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:46.364 10:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:46.364 10:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:46.364 10:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:46.364 10:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:46.364 10:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.364 10:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.364 10:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.364 10:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:46.364 10:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.364 10:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:16:46.364 [ 00:16:46.364 { 00:16:46.364 "name": "BaseBdev4", 00:16:46.364 "aliases": [ 00:16:46.364 "d00ce353-262a-4c93-9fed-337cdcd509c6" 00:16:46.364 ], 00:16:46.364 "product_name": "Malloc disk", 00:16:46.364 "block_size": 512, 00:16:46.364 "num_blocks": 65536, 00:16:46.364 "uuid": "d00ce353-262a-4c93-9fed-337cdcd509c6", 00:16:46.364 "assigned_rate_limits": { 00:16:46.364 "rw_ios_per_sec": 0, 00:16:46.364 "rw_mbytes_per_sec": 0, 00:16:46.364 "r_mbytes_per_sec": 0, 00:16:46.364 "w_mbytes_per_sec": 0 00:16:46.364 }, 00:16:46.364 "claimed": false, 00:16:46.364 "zoned": false, 00:16:46.364 "supported_io_types": { 00:16:46.364 "read": true, 00:16:46.364 "write": true, 00:16:46.364 "unmap": true, 00:16:46.364 "flush": true, 00:16:46.364 "reset": true, 00:16:46.364 "nvme_admin": false, 00:16:46.364 "nvme_io": false, 00:16:46.364 "nvme_io_md": false, 00:16:46.364 "write_zeroes": true, 00:16:46.364 "zcopy": true, 00:16:46.364 "get_zone_info": false, 00:16:46.364 "zone_management": false, 00:16:46.364 "zone_append": false, 00:16:46.364 "compare": false, 00:16:46.364 "compare_and_write": false, 00:16:46.364 "abort": true, 00:16:46.364 "seek_hole": false, 00:16:46.364 "seek_data": false, 00:16:46.364 "copy": true, 00:16:46.364 "nvme_iov_md": false 00:16:46.364 }, 00:16:46.364 "memory_domains": [ 00:16:46.364 { 00:16:46.364 "dma_device_id": "system", 00:16:46.364 "dma_device_type": 1 00:16:46.364 }, 00:16:46.364 { 00:16:46.364 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:46.364 "dma_device_type": 2 00:16:46.364 } 00:16:46.364 ], 00:16:46.364 "driver_specific": {} 00:16:46.364 } 00:16:46.364 ] 00:16:46.364 10:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.364 10:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:46.364 10:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:46.364 10:01:22 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:46.364 10:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:46.364 10:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.364 10:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.364 [2024-10-21 10:01:22.809842] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:46.364 [2024-10-21 10:01:22.809936] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:46.364 [2024-10-21 10:01:22.809982] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:46.364 [2024-10-21 10:01:22.812269] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:46.364 [2024-10-21 10:01:22.812371] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:46.364 10:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.364 10:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:46.364 10:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:46.364 10:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:46.364 10:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:46.364 10:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:46.364 10:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:16:46.364 10:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:46.364 10:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:46.364 10:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:46.364 10:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:46.364 10:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.364 10:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:46.364 10:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.364 10:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.364 10:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.364 10:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:46.364 "name": "Existed_Raid", 00:16:46.364 "uuid": "d79e97f7-8d4b-4b11-bc83-c4ba50797560", 00:16:46.364 "strip_size_kb": 64, 00:16:46.364 "state": "configuring", 00:16:46.364 "raid_level": "raid5f", 00:16:46.364 "superblock": true, 00:16:46.364 "num_base_bdevs": 4, 00:16:46.364 "num_base_bdevs_discovered": 3, 00:16:46.364 "num_base_bdevs_operational": 4, 00:16:46.364 "base_bdevs_list": [ 00:16:46.364 { 00:16:46.364 "name": "BaseBdev1", 00:16:46.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.364 "is_configured": false, 00:16:46.364 "data_offset": 0, 00:16:46.364 "data_size": 0 00:16:46.364 }, 00:16:46.364 { 00:16:46.364 "name": "BaseBdev2", 00:16:46.364 "uuid": "d801bd5f-1a2c-4ed0-a3c4-fd0aae97b065", 00:16:46.364 "is_configured": true, 00:16:46.364 "data_offset": 2048, 00:16:46.364 
"data_size": 63488 00:16:46.364 }, 00:16:46.364 { 00:16:46.364 "name": "BaseBdev3", 00:16:46.364 "uuid": "2ed5b5e9-a3d3-4d12-9f21-9fc2ce7a253e", 00:16:46.364 "is_configured": true, 00:16:46.365 "data_offset": 2048, 00:16:46.365 "data_size": 63488 00:16:46.365 }, 00:16:46.365 { 00:16:46.365 "name": "BaseBdev4", 00:16:46.365 "uuid": "d00ce353-262a-4c93-9fed-337cdcd509c6", 00:16:46.365 "is_configured": true, 00:16:46.365 "data_offset": 2048, 00:16:46.365 "data_size": 63488 00:16:46.365 } 00:16:46.365 ] 00:16:46.365 }' 00:16:46.365 10:01:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:46.365 10:01:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.944 10:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:46.944 10:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.944 10:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.944 [2024-10-21 10:01:23.261117] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:46.944 10:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.944 10:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:46.944 10:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:46.944 10:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:46.944 10:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:46.944 10:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:46.944 10:01:23 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:46.945 10:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:46.945 10:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:46.945 10:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:46.945 10:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:46.945 10:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.945 10:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:46.945 10:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.945 10:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.945 10:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.945 10:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:46.945 "name": "Existed_Raid", 00:16:46.945 "uuid": "d79e97f7-8d4b-4b11-bc83-c4ba50797560", 00:16:46.945 "strip_size_kb": 64, 00:16:46.945 "state": "configuring", 00:16:46.945 "raid_level": "raid5f", 00:16:46.945 "superblock": true, 00:16:46.945 "num_base_bdevs": 4, 00:16:46.945 "num_base_bdevs_discovered": 2, 00:16:46.945 "num_base_bdevs_operational": 4, 00:16:46.945 "base_bdevs_list": [ 00:16:46.945 { 00:16:46.945 "name": "BaseBdev1", 00:16:46.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.945 "is_configured": false, 00:16:46.945 "data_offset": 0, 00:16:46.945 "data_size": 0 00:16:46.945 }, 00:16:46.945 { 00:16:46.945 "name": null, 00:16:46.945 "uuid": "d801bd5f-1a2c-4ed0-a3c4-fd0aae97b065", 00:16:46.945 
"is_configured": false, 00:16:46.945 "data_offset": 0, 00:16:46.945 "data_size": 63488 00:16:46.945 }, 00:16:46.945 { 00:16:46.945 "name": "BaseBdev3", 00:16:46.945 "uuid": "2ed5b5e9-a3d3-4d12-9f21-9fc2ce7a253e", 00:16:46.945 "is_configured": true, 00:16:46.945 "data_offset": 2048, 00:16:46.945 "data_size": 63488 00:16:46.945 }, 00:16:46.945 { 00:16:46.945 "name": "BaseBdev4", 00:16:46.945 "uuid": "d00ce353-262a-4c93-9fed-337cdcd509c6", 00:16:46.945 "is_configured": true, 00:16:46.945 "data_offset": 2048, 00:16:46.945 "data_size": 63488 00:16:46.945 } 00:16:46.945 ] 00:16:46.945 }' 00:16:46.945 10:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:46.945 10:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.291 10:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:47.291 10:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.291 10:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.291 10:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.291 10:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.291 10:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:47.291 10:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:47.291 10:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.291 10:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.623 [2024-10-21 10:01:23.831102] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:16:47.623 BaseBdev1 00:16:47.623 10:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.623 10:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:47.623 10:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:16:47.623 10:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:47.623 10:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:47.623 10:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:47.623 10:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:47.623 10:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:47.623 10:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.623 10:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.623 10:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.623 10:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:47.623 10:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.623 10:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.623 [ 00:16:47.623 { 00:16:47.623 "name": "BaseBdev1", 00:16:47.623 "aliases": [ 00:16:47.623 "97b74ada-fbfd-4588-93f3-2d025d1061d0" 00:16:47.623 ], 00:16:47.623 "product_name": "Malloc disk", 00:16:47.623 "block_size": 512, 00:16:47.623 "num_blocks": 65536, 00:16:47.623 "uuid": "97b74ada-fbfd-4588-93f3-2d025d1061d0", 
00:16:47.623 "assigned_rate_limits": { 00:16:47.623 "rw_ios_per_sec": 0, 00:16:47.623 "rw_mbytes_per_sec": 0, 00:16:47.623 "r_mbytes_per_sec": 0, 00:16:47.623 "w_mbytes_per_sec": 0 00:16:47.623 }, 00:16:47.623 "claimed": true, 00:16:47.623 "claim_type": "exclusive_write", 00:16:47.623 "zoned": false, 00:16:47.623 "supported_io_types": { 00:16:47.623 "read": true, 00:16:47.623 "write": true, 00:16:47.623 "unmap": true, 00:16:47.623 "flush": true, 00:16:47.623 "reset": true, 00:16:47.623 "nvme_admin": false, 00:16:47.623 "nvme_io": false, 00:16:47.623 "nvme_io_md": false, 00:16:47.623 "write_zeroes": true, 00:16:47.623 "zcopy": true, 00:16:47.623 "get_zone_info": false, 00:16:47.623 "zone_management": false, 00:16:47.623 "zone_append": false, 00:16:47.623 "compare": false, 00:16:47.623 "compare_and_write": false, 00:16:47.623 "abort": true, 00:16:47.623 "seek_hole": false, 00:16:47.623 "seek_data": false, 00:16:47.623 "copy": true, 00:16:47.623 "nvme_iov_md": false 00:16:47.623 }, 00:16:47.623 "memory_domains": [ 00:16:47.623 { 00:16:47.623 "dma_device_id": "system", 00:16:47.623 "dma_device_type": 1 00:16:47.623 }, 00:16:47.623 { 00:16:47.623 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:47.623 "dma_device_type": 2 00:16:47.623 } 00:16:47.623 ], 00:16:47.623 "driver_specific": {} 00:16:47.623 } 00:16:47.623 ] 00:16:47.623 10:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.623 10:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:47.623 10:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:47.623 10:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:47.623 10:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:47.623 10:01:23 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:47.623 10:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:47.623 10:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:47.623 10:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:47.623 10:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:47.623 10:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:47.623 10:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:47.623 10:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:47.623 10:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.623 10:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.623 10:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.623 10:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.623 10:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:47.623 "name": "Existed_Raid", 00:16:47.623 "uuid": "d79e97f7-8d4b-4b11-bc83-c4ba50797560", 00:16:47.623 "strip_size_kb": 64, 00:16:47.623 "state": "configuring", 00:16:47.623 "raid_level": "raid5f", 00:16:47.623 "superblock": true, 00:16:47.623 "num_base_bdevs": 4, 00:16:47.623 "num_base_bdevs_discovered": 3, 00:16:47.623 "num_base_bdevs_operational": 4, 00:16:47.623 "base_bdevs_list": [ 00:16:47.623 { 00:16:47.623 "name": "BaseBdev1", 00:16:47.623 "uuid": "97b74ada-fbfd-4588-93f3-2d025d1061d0", 
00:16:47.623 "is_configured": true, 00:16:47.623 "data_offset": 2048, 00:16:47.623 "data_size": 63488 00:16:47.623 }, 00:16:47.623 { 00:16:47.623 "name": null, 00:16:47.623 "uuid": "d801bd5f-1a2c-4ed0-a3c4-fd0aae97b065", 00:16:47.623 "is_configured": false, 00:16:47.623 "data_offset": 0, 00:16:47.623 "data_size": 63488 00:16:47.623 }, 00:16:47.623 { 00:16:47.623 "name": "BaseBdev3", 00:16:47.623 "uuid": "2ed5b5e9-a3d3-4d12-9f21-9fc2ce7a253e", 00:16:47.623 "is_configured": true, 00:16:47.623 "data_offset": 2048, 00:16:47.623 "data_size": 63488 00:16:47.623 }, 00:16:47.623 { 00:16:47.623 "name": "BaseBdev4", 00:16:47.623 "uuid": "d00ce353-262a-4c93-9fed-337cdcd509c6", 00:16:47.623 "is_configured": true, 00:16:47.623 "data_offset": 2048, 00:16:47.623 "data_size": 63488 00:16:47.623 } 00:16:47.623 ] 00:16:47.623 }' 00:16:47.623 10:01:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:47.623 10:01:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.883 10:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.883 10:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.883 10:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.883 10:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:47.883 10:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.883 10:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:47.883 10:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:47.883 10:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:16:47.883 10:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.883 [2024-10-21 10:01:24.378232] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:47.883 10:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.883 10:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:47.883 10:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:47.883 10:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:47.883 10:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:47.883 10:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:47.883 10:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:47.883 10:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:47.883 10:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:47.883 10:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:47.883 10:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:47.883 10:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.883 10:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.883 10:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.883 10:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:16:47.883 10:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.883 10:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:47.883 "name": "Existed_Raid", 00:16:47.883 "uuid": "d79e97f7-8d4b-4b11-bc83-c4ba50797560", 00:16:47.883 "strip_size_kb": 64, 00:16:47.883 "state": "configuring", 00:16:47.883 "raid_level": "raid5f", 00:16:47.883 "superblock": true, 00:16:47.883 "num_base_bdevs": 4, 00:16:47.883 "num_base_bdevs_discovered": 2, 00:16:47.883 "num_base_bdevs_operational": 4, 00:16:47.883 "base_bdevs_list": [ 00:16:47.883 { 00:16:47.883 "name": "BaseBdev1", 00:16:47.883 "uuid": "97b74ada-fbfd-4588-93f3-2d025d1061d0", 00:16:47.883 "is_configured": true, 00:16:47.883 "data_offset": 2048, 00:16:47.883 "data_size": 63488 00:16:47.883 }, 00:16:47.883 { 00:16:47.883 "name": null, 00:16:47.883 "uuid": "d801bd5f-1a2c-4ed0-a3c4-fd0aae97b065", 00:16:47.883 "is_configured": false, 00:16:47.883 "data_offset": 0, 00:16:47.883 "data_size": 63488 00:16:47.883 }, 00:16:47.883 { 00:16:47.883 "name": null, 00:16:47.883 "uuid": "2ed5b5e9-a3d3-4d12-9f21-9fc2ce7a253e", 00:16:47.883 "is_configured": false, 00:16:47.883 "data_offset": 0, 00:16:47.883 "data_size": 63488 00:16:47.883 }, 00:16:47.883 { 00:16:47.883 "name": "BaseBdev4", 00:16:47.883 "uuid": "d00ce353-262a-4c93-9fed-337cdcd509c6", 00:16:47.883 "is_configured": true, 00:16:47.883 "data_offset": 2048, 00:16:47.883 "data_size": 63488 00:16:47.883 } 00:16:47.883 ] 00:16:47.883 }' 00:16:47.883 10:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:47.883 10:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.484 10:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.484 10:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:16:48.484 10:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.484 10:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.484 10:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.484 10:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:48.484 10:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:48.484 10:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.484 10:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.484 [2024-10-21 10:01:24.881363] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:48.484 10:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.484 10:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:48.484 10:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:48.484 10:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:48.484 10:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:48.484 10:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:48.484 10:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:48.484 10:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:48.484 10:01:24 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:48.484 10:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:48.484 10:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:48.484 10:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.484 10:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.484 10:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.484 10:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:48.484 10:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.484 10:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:48.484 "name": "Existed_Raid", 00:16:48.484 "uuid": "d79e97f7-8d4b-4b11-bc83-c4ba50797560", 00:16:48.484 "strip_size_kb": 64, 00:16:48.484 "state": "configuring", 00:16:48.484 "raid_level": "raid5f", 00:16:48.484 "superblock": true, 00:16:48.484 "num_base_bdevs": 4, 00:16:48.484 "num_base_bdevs_discovered": 3, 00:16:48.484 "num_base_bdevs_operational": 4, 00:16:48.484 "base_bdevs_list": [ 00:16:48.484 { 00:16:48.484 "name": "BaseBdev1", 00:16:48.484 "uuid": "97b74ada-fbfd-4588-93f3-2d025d1061d0", 00:16:48.484 "is_configured": true, 00:16:48.484 "data_offset": 2048, 00:16:48.484 "data_size": 63488 00:16:48.484 }, 00:16:48.484 { 00:16:48.484 "name": null, 00:16:48.484 "uuid": "d801bd5f-1a2c-4ed0-a3c4-fd0aae97b065", 00:16:48.484 "is_configured": false, 00:16:48.484 "data_offset": 0, 00:16:48.484 "data_size": 63488 00:16:48.484 }, 00:16:48.484 { 00:16:48.484 "name": "BaseBdev3", 00:16:48.484 "uuid": "2ed5b5e9-a3d3-4d12-9f21-9fc2ce7a253e", 00:16:48.484 
"is_configured": true, 00:16:48.484 "data_offset": 2048, 00:16:48.484 "data_size": 63488 00:16:48.484 }, 00:16:48.484 { 00:16:48.484 "name": "BaseBdev4", 00:16:48.484 "uuid": "d00ce353-262a-4c93-9fed-337cdcd509c6", 00:16:48.484 "is_configured": true, 00:16:48.484 "data_offset": 2048, 00:16:48.484 "data_size": 63488 00:16:48.484 } 00:16:48.484 ] 00:16:48.484 }' 00:16:48.484 10:01:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:48.484 10:01:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.745 10:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.745 10:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:48.745 10:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.745 10:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.005 10:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.005 10:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:49.005 10:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:49.005 10:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.005 10:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.005 [2024-10-21 10:01:25.380578] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:49.005 10:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.005 10:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring 
raid5f 64 4 00:16:49.005 10:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:49.005 10:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:49.005 10:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:49.005 10:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:49.005 10:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:49.005 10:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:49.005 10:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:49.005 10:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:49.005 10:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:49.005 10:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.005 10:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:49.005 10:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.005 10:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.005 10:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.005 10:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:49.005 "name": "Existed_Raid", 00:16:49.005 "uuid": "d79e97f7-8d4b-4b11-bc83-c4ba50797560", 00:16:49.005 "strip_size_kb": 64, 00:16:49.005 "state": "configuring", 00:16:49.005 "raid_level": "raid5f", 00:16:49.005 
"superblock": true, 00:16:49.005 "num_base_bdevs": 4, 00:16:49.005 "num_base_bdevs_discovered": 2, 00:16:49.005 "num_base_bdevs_operational": 4, 00:16:49.005 "base_bdevs_list": [ 00:16:49.005 { 00:16:49.005 "name": null, 00:16:49.005 "uuid": "97b74ada-fbfd-4588-93f3-2d025d1061d0", 00:16:49.005 "is_configured": false, 00:16:49.005 "data_offset": 0, 00:16:49.005 "data_size": 63488 00:16:49.005 }, 00:16:49.005 { 00:16:49.005 "name": null, 00:16:49.005 "uuid": "d801bd5f-1a2c-4ed0-a3c4-fd0aae97b065", 00:16:49.005 "is_configured": false, 00:16:49.005 "data_offset": 0, 00:16:49.005 "data_size": 63488 00:16:49.005 }, 00:16:49.005 { 00:16:49.005 "name": "BaseBdev3", 00:16:49.005 "uuid": "2ed5b5e9-a3d3-4d12-9f21-9fc2ce7a253e", 00:16:49.005 "is_configured": true, 00:16:49.005 "data_offset": 2048, 00:16:49.005 "data_size": 63488 00:16:49.005 }, 00:16:49.005 { 00:16:49.005 "name": "BaseBdev4", 00:16:49.005 "uuid": "d00ce353-262a-4c93-9fed-337cdcd509c6", 00:16:49.005 "is_configured": true, 00:16:49.005 "data_offset": 2048, 00:16:49.005 "data_size": 63488 00:16:49.005 } 00:16:49.005 ] 00:16:49.005 }' 00:16:49.005 10:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:49.005 10:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.574 10:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.574 10:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.574 10:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.574 10:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:49.574 10:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.574 10:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- 
# [[ false == \f\a\l\s\e ]] 00:16:49.574 10:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:49.574 10:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.574 10:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.574 [2024-10-21 10:01:25.993762] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:49.574 10:01:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.574 10:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:49.574 10:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:49.574 10:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:49.574 10:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:49.574 10:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:49.574 10:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:49.574 10:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:49.574 10:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:49.574 10:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:49.574 10:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:49.574 10:01:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.574 10:01:26 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.574 10:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.574 10:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:49.574 10:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.574 10:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:49.574 "name": "Existed_Raid", 00:16:49.574 "uuid": "d79e97f7-8d4b-4b11-bc83-c4ba50797560", 00:16:49.574 "strip_size_kb": 64, 00:16:49.574 "state": "configuring", 00:16:49.574 "raid_level": "raid5f", 00:16:49.574 "superblock": true, 00:16:49.574 "num_base_bdevs": 4, 00:16:49.574 "num_base_bdevs_discovered": 3, 00:16:49.574 "num_base_bdevs_operational": 4, 00:16:49.574 "base_bdevs_list": [ 00:16:49.574 { 00:16:49.574 "name": null, 00:16:49.574 "uuid": "97b74ada-fbfd-4588-93f3-2d025d1061d0", 00:16:49.574 "is_configured": false, 00:16:49.574 "data_offset": 0, 00:16:49.574 "data_size": 63488 00:16:49.574 }, 00:16:49.574 { 00:16:49.574 "name": "BaseBdev2", 00:16:49.574 "uuid": "d801bd5f-1a2c-4ed0-a3c4-fd0aae97b065", 00:16:49.574 "is_configured": true, 00:16:49.575 "data_offset": 2048, 00:16:49.575 "data_size": 63488 00:16:49.575 }, 00:16:49.575 { 00:16:49.575 "name": "BaseBdev3", 00:16:49.575 "uuid": "2ed5b5e9-a3d3-4d12-9f21-9fc2ce7a253e", 00:16:49.575 "is_configured": true, 00:16:49.575 "data_offset": 2048, 00:16:49.575 "data_size": 63488 00:16:49.575 }, 00:16:49.575 { 00:16:49.575 "name": "BaseBdev4", 00:16:49.575 "uuid": "d00ce353-262a-4c93-9fed-337cdcd509c6", 00:16:49.575 "is_configured": true, 00:16:49.575 "data_offset": 2048, 00:16:49.575 "data_size": 63488 00:16:49.575 } 00:16:49.575 ] 00:16:49.575 }' 00:16:49.575 10:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:16:49.575 10:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.144 10:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.144 10:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:50.144 10:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.144 10:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.144 10:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.144 10:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:50.144 10:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.144 10:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:50.144 10:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.144 10:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.144 10:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.144 10:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 97b74ada-fbfd-4588-93f3-2d025d1061d0 00:16:50.144 10:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.144 10:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.144 [2024-10-21 10:01:26.613184] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:50.144 [2024-10-21 10:01:26.613591] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:16:50.144 [2024-10-21 10:01:26.613610] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:50.144 [2024-10-21 10:01:26.613897] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:50.144 NewBaseBdev 00:16:50.144 10:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.144 10:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:50.144 10:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:16:50.145 10:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:50.145 10:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:50.145 10:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:50.145 10:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:50.145 10:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:50.145 10:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.145 10:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.145 [2024-10-21 10:01:26.621053] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:16:50.145 [2024-10-21 10:01:26.621078] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006600 00:16:50.145 [2024-10-21 10:01:26.621322] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:50.145 10:01:26 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.145 10:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:50.145 10:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.145 10:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.145 [ 00:16:50.145 { 00:16:50.145 "name": "NewBaseBdev", 00:16:50.145 "aliases": [ 00:16:50.145 "97b74ada-fbfd-4588-93f3-2d025d1061d0" 00:16:50.145 ], 00:16:50.145 "product_name": "Malloc disk", 00:16:50.145 "block_size": 512, 00:16:50.145 "num_blocks": 65536, 00:16:50.145 "uuid": "97b74ada-fbfd-4588-93f3-2d025d1061d0", 00:16:50.145 "assigned_rate_limits": { 00:16:50.145 "rw_ios_per_sec": 0, 00:16:50.145 "rw_mbytes_per_sec": 0, 00:16:50.145 "r_mbytes_per_sec": 0, 00:16:50.145 "w_mbytes_per_sec": 0 00:16:50.145 }, 00:16:50.145 "claimed": true, 00:16:50.145 "claim_type": "exclusive_write", 00:16:50.145 "zoned": false, 00:16:50.145 "supported_io_types": { 00:16:50.145 "read": true, 00:16:50.145 "write": true, 00:16:50.145 "unmap": true, 00:16:50.145 "flush": true, 00:16:50.145 "reset": true, 00:16:50.145 "nvme_admin": false, 00:16:50.145 "nvme_io": false, 00:16:50.145 "nvme_io_md": false, 00:16:50.145 "write_zeroes": true, 00:16:50.145 "zcopy": true, 00:16:50.145 "get_zone_info": false, 00:16:50.145 "zone_management": false, 00:16:50.145 "zone_append": false, 00:16:50.145 "compare": false, 00:16:50.145 "compare_and_write": false, 00:16:50.145 "abort": true, 00:16:50.145 "seek_hole": false, 00:16:50.145 "seek_data": false, 00:16:50.145 "copy": true, 00:16:50.145 "nvme_iov_md": false 00:16:50.145 }, 00:16:50.145 "memory_domains": [ 00:16:50.145 { 00:16:50.145 "dma_device_id": "system", 00:16:50.145 "dma_device_type": 1 00:16:50.145 }, 00:16:50.145 { 00:16:50.145 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:50.145 "dma_device_type": 2 00:16:50.145 } 
00:16:50.145 ], 00:16:50.145 "driver_specific": {} 00:16:50.145 } 00:16:50.145 ] 00:16:50.145 10:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.145 10:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:50.145 10:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:50.145 10:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:50.145 10:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:50.145 10:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:50.145 10:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:50.145 10:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:50.145 10:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:50.145 10:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:50.145 10:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:50.145 10:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:50.145 10:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.145 10:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:50.145 10:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.145 10:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.145 
10:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.145 10:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:50.145 "name": "Existed_Raid", 00:16:50.145 "uuid": "d79e97f7-8d4b-4b11-bc83-c4ba50797560", 00:16:50.145 "strip_size_kb": 64, 00:16:50.145 "state": "online", 00:16:50.145 "raid_level": "raid5f", 00:16:50.145 "superblock": true, 00:16:50.145 "num_base_bdevs": 4, 00:16:50.145 "num_base_bdevs_discovered": 4, 00:16:50.145 "num_base_bdevs_operational": 4, 00:16:50.145 "base_bdevs_list": [ 00:16:50.145 { 00:16:50.145 "name": "NewBaseBdev", 00:16:50.145 "uuid": "97b74ada-fbfd-4588-93f3-2d025d1061d0", 00:16:50.145 "is_configured": true, 00:16:50.145 "data_offset": 2048, 00:16:50.145 "data_size": 63488 00:16:50.145 }, 00:16:50.145 { 00:16:50.145 "name": "BaseBdev2", 00:16:50.145 "uuid": "d801bd5f-1a2c-4ed0-a3c4-fd0aae97b065", 00:16:50.145 "is_configured": true, 00:16:50.145 "data_offset": 2048, 00:16:50.145 "data_size": 63488 00:16:50.145 }, 00:16:50.145 { 00:16:50.145 "name": "BaseBdev3", 00:16:50.145 "uuid": "2ed5b5e9-a3d3-4d12-9f21-9fc2ce7a253e", 00:16:50.145 "is_configured": true, 00:16:50.145 "data_offset": 2048, 00:16:50.145 "data_size": 63488 00:16:50.145 }, 00:16:50.145 { 00:16:50.145 "name": "BaseBdev4", 00:16:50.145 "uuid": "d00ce353-262a-4c93-9fed-337cdcd509c6", 00:16:50.145 "is_configured": true, 00:16:50.145 "data_offset": 2048, 00:16:50.145 "data_size": 63488 00:16:50.145 } 00:16:50.145 ] 00:16:50.145 }' 00:16:50.145 10:01:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:50.145 10:01:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.715 10:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:50.715 10:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local 
raid_bdev_name=Existed_Raid 00:16:50.715 10:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:50.715 10:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:50.715 10:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:50.715 10:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:50.715 10:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:50.715 10:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:50.715 10:01:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.715 10:01:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.715 [2024-10-21 10:01:27.089550] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:50.715 10:01:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.715 10:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:50.715 "name": "Existed_Raid", 00:16:50.715 "aliases": [ 00:16:50.715 "d79e97f7-8d4b-4b11-bc83-c4ba50797560" 00:16:50.715 ], 00:16:50.715 "product_name": "Raid Volume", 00:16:50.715 "block_size": 512, 00:16:50.715 "num_blocks": 190464, 00:16:50.715 "uuid": "d79e97f7-8d4b-4b11-bc83-c4ba50797560", 00:16:50.715 "assigned_rate_limits": { 00:16:50.715 "rw_ios_per_sec": 0, 00:16:50.715 "rw_mbytes_per_sec": 0, 00:16:50.715 "r_mbytes_per_sec": 0, 00:16:50.715 "w_mbytes_per_sec": 0 00:16:50.715 }, 00:16:50.715 "claimed": false, 00:16:50.715 "zoned": false, 00:16:50.715 "supported_io_types": { 00:16:50.715 "read": true, 00:16:50.715 "write": true, 00:16:50.715 "unmap": false, 00:16:50.715 "flush": false, 
00:16:50.715 "reset": true, 00:16:50.715 "nvme_admin": false, 00:16:50.715 "nvme_io": false, 00:16:50.715 "nvme_io_md": false, 00:16:50.715 "write_zeroes": true, 00:16:50.715 "zcopy": false, 00:16:50.715 "get_zone_info": false, 00:16:50.715 "zone_management": false, 00:16:50.715 "zone_append": false, 00:16:50.715 "compare": false, 00:16:50.715 "compare_and_write": false, 00:16:50.715 "abort": false, 00:16:50.715 "seek_hole": false, 00:16:50.715 "seek_data": false, 00:16:50.715 "copy": false, 00:16:50.715 "nvme_iov_md": false 00:16:50.715 }, 00:16:50.715 "driver_specific": { 00:16:50.715 "raid": { 00:16:50.715 "uuid": "d79e97f7-8d4b-4b11-bc83-c4ba50797560", 00:16:50.715 "strip_size_kb": 64, 00:16:50.715 "state": "online", 00:16:50.715 "raid_level": "raid5f", 00:16:50.715 "superblock": true, 00:16:50.715 "num_base_bdevs": 4, 00:16:50.715 "num_base_bdevs_discovered": 4, 00:16:50.715 "num_base_bdevs_operational": 4, 00:16:50.715 "base_bdevs_list": [ 00:16:50.715 { 00:16:50.715 "name": "NewBaseBdev", 00:16:50.715 "uuid": "97b74ada-fbfd-4588-93f3-2d025d1061d0", 00:16:50.715 "is_configured": true, 00:16:50.715 "data_offset": 2048, 00:16:50.715 "data_size": 63488 00:16:50.715 }, 00:16:50.715 { 00:16:50.715 "name": "BaseBdev2", 00:16:50.715 "uuid": "d801bd5f-1a2c-4ed0-a3c4-fd0aae97b065", 00:16:50.715 "is_configured": true, 00:16:50.715 "data_offset": 2048, 00:16:50.715 "data_size": 63488 00:16:50.715 }, 00:16:50.715 { 00:16:50.715 "name": "BaseBdev3", 00:16:50.715 "uuid": "2ed5b5e9-a3d3-4d12-9f21-9fc2ce7a253e", 00:16:50.715 "is_configured": true, 00:16:50.715 "data_offset": 2048, 00:16:50.715 "data_size": 63488 00:16:50.715 }, 00:16:50.715 { 00:16:50.715 "name": "BaseBdev4", 00:16:50.715 "uuid": "d00ce353-262a-4c93-9fed-337cdcd509c6", 00:16:50.715 "is_configured": true, 00:16:50.715 "data_offset": 2048, 00:16:50.715 "data_size": 63488 00:16:50.715 } 00:16:50.715 ] 00:16:50.715 } 00:16:50.715 } 00:16:50.715 }' 00:16:50.715 10:01:27 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:50.715 10:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:50.715 BaseBdev2 00:16:50.715 BaseBdev3 00:16:50.715 BaseBdev4' 00:16:50.715 10:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:50.715 10:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:50.715 10:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:50.715 10:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:50.715 10:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:50.715 10:01:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.715 10:01:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.715 10:01:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.715 10:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:50.715 10:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:50.715 10:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:50.715 10:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:50.715 10:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:50.715 
10:01:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.715 10:01:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.715 10:01:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.715 10:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:50.715 10:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:50.715 10:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:50.716 10:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:50.716 10:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:50.716 10:01:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.716 10:01:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.716 10:01:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.976 10:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:50.976 10:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:50.976 10:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:50.976 10:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:50.976 10:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:50.976 10:01:27 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.976 10:01:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.976 10:01:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.976 10:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:50.976 10:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:50.976 10:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:50.976 10:01:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.976 10:01:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.976 [2024-10-21 10:01:27.376777] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:50.976 [2024-10-21 10:01:27.376809] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:50.976 [2024-10-21 10:01:27.376886] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:50.976 [2024-10-21 10:01:27.377198] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:50.976 [2024-10-21 10:01:27.377209] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state offline 00:16:50.976 10:01:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.976 10:01:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 83096 00:16:50.976 10:01:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 83096 ']' 00:16:50.976 10:01:27 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@954 -- # kill -0 83096 00:16:50.976 10:01:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:16:50.976 10:01:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:50.976 10:01:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83096 00:16:50.976 10:01:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:50.976 killing process with pid 83096 00:16:50.976 10:01:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:50.976 10:01:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83096' 00:16:50.976 10:01:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 83096 00:16:50.976 [2024-10-21 10:01:27.414885] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:50.976 10:01:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 83096 00:16:51.545 [2024-10-21 10:01:27.848420] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:52.491 ************************************ 00:16:52.491 END TEST raid5f_state_function_test_sb 00:16:52.491 ************************************ 00:16:52.491 10:01:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:16:52.491 00:16:52.491 real 0m11.971s 00:16:52.491 user 0m18.649s 00:16:52.491 sys 0m2.321s 00:16:52.491 10:01:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:52.491 10:01:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.751 10:01:29 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:16:52.751 10:01:29 bdev_raid -- 
common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:16:52.751 10:01:29 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:52.751 10:01:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:52.751 ************************************ 00:16:52.751 START TEST raid5f_superblock_test 00:16:52.751 ************************************ 00:16:52.751 10:01:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid5f 4 00:16:52.751 10:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:16:52.751 10:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:16:52.751 10:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:52.751 10:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:52.751 10:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:52.751 10:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:16:52.751 10:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:52.751 10:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:52.751 10:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:16:52.751 10:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:52.751 10:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:52.751 10:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:52.751 10:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:52.751 10:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 
00:16:52.751 10:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:16:52.751 10:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:16:52.751 10:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=83767 00:16:52.751 10:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:52.751 10:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 83767 00:16:52.751 10:01:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 83767 ']' 00:16:52.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:52.751 10:01:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:52.751 10:01:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:52.751 10:01:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:52.751 10:01:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:52.751 10:01:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.751 [2024-10-21 10:01:29.251908] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
00:16:52.751 [2024-10-21 10:01:29.252044] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83767 ] 00:16:53.011 [2024-10-21 10:01:29.418219] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:53.011 [2024-10-21 10:01:29.562798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:53.271 [2024-10-21 10:01:29.810442] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:53.271 [2024-10-21 10:01:29.810478] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:53.532 10:01:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:53.532 10:01:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:16:53.532 10:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:53.532 10:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:53.532 10:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:53.532 10:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:16:53.532 10:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:53.532 10:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:53.532 10:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:53.532 10:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:53.532 10:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:16:53.532 10:01:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.532 10:01:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.795 malloc1 00:16:53.795 10:01:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.795 10:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:53.795 10:01:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.795 10:01:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.795 [2024-10-21 10:01:30.146742] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:53.795 [2024-10-21 10:01:30.146911] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:53.795 [2024-10-21 10:01:30.146963] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:16:53.795 [2024-10-21 10:01:30.146994] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:53.795 [2024-10-21 10:01:30.149424] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:53.795 [2024-10-21 10:01:30.149497] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:53.795 pt1 00:16:53.795 10:01:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.795 10:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:53.795 10:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:53.795 10:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:53.795 10:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:16:53.795 10:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:16:53.795 10:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:16:53.795 10:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:16:53.795 10:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:16:53.795 10:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2
00:16:53.795 10:01:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:53.795 10:01:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:53.795 malloc2
00:16:53.795 10:01:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:53.795 10:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:16:53.795 10:01:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:53.795 10:01:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:53.795 [2024-10-21 10:01:30.215641] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:16:53.795 [2024-10-21 10:01:30.215712] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:53.795 [2024-10-21 10:01:30.215737] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80
00:16:53.795 [2024-10-21 10:01:30.215747] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:53.795 [2024-10-21 10:01:30.218097] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:53.795 [2024-10-21 10:01:30.218134] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:16:53.795 pt2
00:16:53.795 10:01:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:53.795 10:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:16:53.795 10:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:16:53.795 10:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3
00:16:53.795 10:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3
00:16:53.795 10:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003
00:16:53.795 10:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:16:53.795 10:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:16:53.795 10:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:16:53.795 10:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3
00:16:53.795 10:01:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:53.795 10:01:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:53.795 malloc3
00:16:53.795 10:01:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:53.795 10:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:16:53.795 10:01:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:53.795 10:01:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:53.795 [2024-10-21 10:01:30.298908] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:16:53.795 [2024-10-21 10:01:30.299075] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:53.795 [2024-10-21 10:01:30.299125] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780
00:16:53.795 [2024-10-21 10:01:30.299161] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:53.795 [2024-10-21 10:01:30.301676] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:53.795 [2024-10-21 10:01:30.301752] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:16:53.795 pt3
00:16:53.795 10:01:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:53.795 10:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:16:53.795 10:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:16:53.795 10:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4
00:16:53.795 10:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4
00:16:53.795 10:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004
00:16:53.795 10:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:16:53.795 10:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:16:53.795 10:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:16:53.795 10:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4
00:16:53.795 10:01:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:53.795 10:01:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:53.795 malloc4
00:16:53.795 10:01:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:53.795 10:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:16:53.795 10:01:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:53.795 10:01:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:53.795 [2024-10-21 10:01:30.367378] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:16:53.795 [2024-10-21 10:01:30.367493] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:53.795 [2024-10-21 10:01:30.367548] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380
00:16:53.795 [2024-10-21 10:01:30.367595] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:53.795 [2024-10-21 10:01:30.369980] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:53.795 [2024-10-21 10:01:30.370049] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:16:53.795 pt4
00:16:53.795 10:01:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:53.795 10:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:16:53.795 10:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:16:53.795 10:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s
00:16:53.795 10:01:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:53.796 10:01:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:53.796 [2024-10-21 10:01:30.379425] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:16:53.796 [2024-10-21 10:01:30.381570] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:16:53.796 [2024-10-21 10:01:30.381728] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:16:53.796 [2024-10-21 10:01:30.381811] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:16:53.796 [2024-10-21 10:01:30.382056] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000005b80
00:16:53.796 [2024-10-21 10:01:30.382101] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:16:53.796 [2024-10-21 10:01:30.382374] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70
00:16:54.060 [2024-10-21 10:01:30.390870] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000005b80
00:16:54.060 [2024-10-21 10:01:30.390927] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000005b80
00:16:54.060 [2024-10-21 10:01:30.391183] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:16:54.060 10:01:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:54.060 10:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4
00:16:54.060 10:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:16:54.060 10:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:16:54.060 10:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:16:54.060 10:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:16:54.060 10:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:16:54.060 10:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:54.060 10:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:54.060 10:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:54.060 10:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:54.060 10:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:54.060 10:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:54.060 10:01:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:54.060 10:01:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:54.060 10:01:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:54.060 10:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:54.060 "name": "raid_bdev1",
00:16:54.060 "uuid": "4cd4dc20-5113-4e4d-ab16-5f7be2b89dd1",
00:16:54.060 "strip_size_kb": 64,
00:16:54.060 "state": "online",
00:16:54.060 "raid_level": "raid5f",
00:16:54.060 "superblock": true,
00:16:54.060 "num_base_bdevs": 4,
00:16:54.060 "num_base_bdevs_discovered": 4,
00:16:54.060 "num_base_bdevs_operational": 4,
00:16:54.060 "base_bdevs_list": [
00:16:54.060 {
00:16:54.060 "name": "pt1",
00:16:54.060 "uuid": "00000000-0000-0000-0000-000000000001",
00:16:54.060 "is_configured": true,
00:16:54.060 "data_offset": 2048,
00:16:54.060 "data_size": 63488
00:16:54.060 },
00:16:54.060 {
00:16:54.060 "name": "pt2",
00:16:54.060 "uuid": "00000000-0000-0000-0000-000000000002",
00:16:54.060 "is_configured": true,
00:16:54.060 "data_offset": 2048,
00:16:54.060 "data_size": 63488
00:16:54.060 },
00:16:54.060 {
00:16:54.060 "name": "pt3",
00:16:54.060 "uuid": "00000000-0000-0000-0000-000000000003",
00:16:54.060 "is_configured": true,
00:16:54.060 "data_offset": 2048,
00:16:54.060 "data_size": 63488
00:16:54.060 },
00:16:54.060 {
00:16:54.060 "name": "pt4",
00:16:54.060 "uuid": "00000000-0000-0000-0000-000000000004",
00:16:54.060 "is_configured": true,
00:16:54.060 "data_offset": 2048,
00:16:54.060 "data_size": 63488
00:16:54.060 }
00:16:54.060 ]
00:16:54.060 }'
00:16:54.060 10:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:54.060 10:01:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:54.320 10:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:16:54.320 10:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:16:54.320 10:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:16:54.320 10:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:16:54.320 10:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:16:54.320 10:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:16:54.320 10:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:16:54.320 10:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:16:54.320 10:01:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:54.320 10:01:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:54.320 [2024-10-21 10:01:30.879742] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:16:54.320 10:01:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:54.320 10:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:16:54.320 "name": "raid_bdev1",
00:16:54.320 "aliases": [
00:16:54.320 "4cd4dc20-5113-4e4d-ab16-5f7be2b89dd1"
00:16:54.320 ],
00:16:54.320 "product_name": "Raid Volume",
00:16:54.320 "block_size": 512,
00:16:54.320 "num_blocks": 190464,
00:16:54.320 "uuid": "4cd4dc20-5113-4e4d-ab16-5f7be2b89dd1",
00:16:54.320 "assigned_rate_limits": {
00:16:54.320 "rw_ios_per_sec": 0,
00:16:54.320 "rw_mbytes_per_sec": 0,
00:16:54.320 "r_mbytes_per_sec": 0,
00:16:54.320 "w_mbytes_per_sec": 0
00:16:54.320 },
00:16:54.320 "claimed": false,
00:16:54.320 "zoned": false,
00:16:54.320 "supported_io_types": {
00:16:54.320 "read": true,
00:16:54.321 "write": true,
00:16:54.321 "unmap": false,
00:16:54.321 "flush": false,
00:16:54.321 "reset": true,
00:16:54.321 "nvme_admin": false,
00:16:54.321 "nvme_io": false,
00:16:54.321 "nvme_io_md": false,
00:16:54.321 "write_zeroes": true,
00:16:54.321 "zcopy": false,
00:16:54.321 "get_zone_info": false,
00:16:54.321 "zone_management": false,
00:16:54.321 "zone_append": false,
00:16:54.321 "compare": false,
00:16:54.321 "compare_and_write": false,
00:16:54.321 "abort": false,
00:16:54.321 "seek_hole": false,
00:16:54.321 "seek_data": false,
00:16:54.321 "copy": false,
00:16:54.321 "nvme_iov_md": false
00:16:54.321 },
00:16:54.321 "driver_specific": {
00:16:54.321 "raid": {
00:16:54.321 "uuid": "4cd4dc20-5113-4e4d-ab16-5f7be2b89dd1",
00:16:54.321 "strip_size_kb": 64,
00:16:54.321 "state": "online",
00:16:54.321 "raid_level": "raid5f",
00:16:54.321 "superblock": true,
00:16:54.321 "num_base_bdevs": 4,
00:16:54.321 "num_base_bdevs_discovered": 4,
00:16:54.321 "num_base_bdevs_operational": 4,
00:16:54.321 "base_bdevs_list": [
00:16:54.321 {
00:16:54.321 "name": "pt1",
00:16:54.321 "uuid": "00000000-0000-0000-0000-000000000001",
00:16:54.321 "is_configured": true,
00:16:54.321 "data_offset": 2048,
00:16:54.321 "data_size": 63488
00:16:54.321 },
00:16:54.321 {
00:16:54.321 "name": "pt2",
00:16:54.321 "uuid": "00000000-0000-0000-0000-000000000002",
00:16:54.321 "is_configured": true,
00:16:54.321 "data_offset": 2048,
00:16:54.321 "data_size": 63488
00:16:54.321 },
00:16:54.321 {
00:16:54.321 "name": "pt3",
00:16:54.321 "uuid": "00000000-0000-0000-0000-000000000003",
00:16:54.321 "is_configured": true,
00:16:54.321 "data_offset": 2048,
00:16:54.321 "data_size": 63488
00:16:54.321 },
00:16:54.321 {
00:16:54.321 "name": "pt4",
00:16:54.321 "uuid": "00000000-0000-0000-0000-000000000004",
00:16:54.321 "is_configured": true,
00:16:54.321 "data_offset": 2048,
00:16:54.321 "data_size": 63488
00:16:54.321 }
00:16:54.321 ]
00:16:54.321 }
00:16:54.321 }
00:16:54.321 }'
00:16:54.321 10:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:16:54.582 10:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:16:54.582 pt2
00:16:54.582 pt3
00:16:54.582 pt4'
00:16:54.582 10:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:16:54.582 10:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:16:54.582 10:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:16:54.582 10:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:16:54.582 10:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:16:54.582 10:01:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:54.582 10:01:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:54.582 10:01:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:54.582 10:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:16:54.582 10:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:16:54.582 10:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:16:54.582 10:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:16:54.582 10:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:16:54.582 10:01:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:54.582 10:01:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:54.582 10:01:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:54.582 10:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:16:54.582 10:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:16:54.582 10:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:16:54.582 10:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:16:54.582 10:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:16:54.582 10:01:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:54.582 10:01:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:54.582 10:01:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:54.582 10:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:16:54.582 10:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:16:54.582 10:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:16:54.582 10:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4
00:16:54.582 10:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:16:54.582 10:01:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:54.582 10:01:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:54.582 10:01:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:54.843 10:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:16:54.843 10:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:16:54.843 10:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:16:54.843 10:01:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:54.843 10:01:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:54.843 10:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:16:54.843 [2024-10-21 10:01:31.203171] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:16:54.843 10:01:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:54.843 10:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=4cd4dc20-5113-4e4d-ab16-5f7be2b89dd1
00:16:54.843 10:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 4cd4dc20-5113-4e4d-ab16-5f7be2b89dd1 ']'
00:16:54.843 10:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:16:54.843 10:01:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:54.843 10:01:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:54.843 [2024-10-21 10:01:31.230951] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:16:54.843 [2024-10-21 10:01:31.231022] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:16:54.843 [2024-10-21 10:01:31.231119] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:16:54.843 [2024-10-21 10:01:31.231245] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:16:54.843 [2024-10-21 10:01:31.231315] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000005b80 name raid_bdev1, state offline
00:16:54.843 10:01:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:54.843 10:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:16:54.843 10:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:54.843 10:01:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:54.843 10:01:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:54.843 10:01:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:54.843 10:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:16:54.843 10:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:16:54.843 10:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:16:54.843 10:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:16:54.843 10:01:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:54.843 10:01:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:54.843 10:01:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:54.843 10:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:16:54.843 10:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:16:54.843 10:01:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:54.843 10:01:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:54.843 10:01:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:54.843 10:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:16:54.843 10:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3
00:16:54.843 10:01:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:54.843 10:01:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:54.843 10:01:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:54.843 10:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:16:54.843 10:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4
00:16:54.843 10:01:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:54.843 10:01:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:54.843 10:01:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:54.843 10:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:16:54.843 10:01:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:54.843 10:01:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:54.843 10:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:16:54.843 10:01:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:54.843 10:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:16:54.843 10:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:16:54.843 10:01:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0
00:16:54.843 10:01:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:16:54.843 10:01:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:16:54.843 10:01:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:16:54.843 10:01:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:16:54.843 10:01:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:16:54.843 10:01:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:16:54.843 10:01:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:54.843 10:01:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:54.843 [2024-10-21 10:01:31.386757] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:16:54.843 [2024-10-21 10:01:31.388930] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:16:54.843 [2024-10-21 10:01:31.388973] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:16:54.843 [2024-10-21 10:01:31.389006] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed
00:16:54.843 [2024-10-21 10:01:31.389076] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:16:54.843 [2024-10-21 10:01:31.389133] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:16:54.843 [2024-10-21 10:01:31.389152] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3
00:16:54.843 [2024-10-21 10:01:31.389172] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4
00:16:54.843 [2024-10-21 10:01:31.389185] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:16:54.843 [2024-10-21 10:01:31.389196] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000005f00 name raid_bdev1, state configuring
00:16:54.843 request:
00:16:54.843 {
00:16:54.843 "name": "raid_bdev1",
00:16:54.843 "raid_level": "raid5f",
00:16:54.843 "base_bdevs": [
00:16:54.843 "malloc1",
00:16:54.843 "malloc2",
00:16:54.843 "malloc3",
00:16:54.843 "malloc4"
00:16:54.843 ],
00:16:54.843 "strip_size_kb": 64,
00:16:54.843 "superblock": false,
00:16:54.843 "method": "bdev_raid_create",
00:16:54.843 "req_id": 1
00:16:54.843 }
00:16:54.843 Got JSON-RPC error response
00:16:54.843 response:
00:16:54.843 {
00:16:54.843 "code": -17,
00:16:54.843 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:16:54.843 }
00:16:54.843 10:01:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:16:54.843 10:01:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1
00:16:54.844 10:01:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:16:54.844 10:01:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:16:54.844 10:01:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:16:54.844 10:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:16:54.844 10:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:54.844 10:01:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:54.844 10:01:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:54.844 10:01:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:54.844 10:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:16:54.844 10:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:16:54.844 10:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:16:54.844 10:01:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:54.844 10:01:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:55.104 [2024-10-21 10:01:31.442677] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:16:55.104 [2024-10-21 10:01:31.442799] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:55.104 [2024-10-21 10:01:31.442835] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80
00:16:55.104 [2024-10-21 10:01:31.442866] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:55.104 [2024-10-21 10:01:31.445392] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:55.104 [2024-10-21 10:01:31.445467] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:16:55.104 [2024-10-21 10:01:31.445603] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:16:55.104 [2024-10-21 10:01:31.445685] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:16:55.104 pt1
00:16:55.104 10:01:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:55.104 10:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4
00:16:55.104 10:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:16:55.104 10:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:16:55.104 10:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:16:55.104 10:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:16:55.104 10:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:16:55.104 10:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:55.104 10:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:55.104 10:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:55.104 10:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:55.104 10:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:55.104 10:01:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:55.104 10:01:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:55.104 10:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:55.104 10:01:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:55.104 10:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:55.104 "name": "raid_bdev1",
00:16:55.104 "uuid": "4cd4dc20-5113-4e4d-ab16-5f7be2b89dd1",
00:16:55.104 "strip_size_kb": 64,
00:16:55.104 "state": "configuring",
00:16:55.104 "raid_level": "raid5f",
00:16:55.104 "superblock": true,
00:16:55.104 "num_base_bdevs": 4,
00:16:55.104 "num_base_bdevs_discovered": 1,
00:16:55.104 "num_base_bdevs_operational": 4,
00:16:55.104 "base_bdevs_list": [
00:16:55.104 {
00:16:55.104 "name": "pt1",
00:16:55.104 "uuid": "00000000-0000-0000-0000-000000000001",
00:16:55.104 "is_configured": true,
00:16:55.104 "data_offset": 2048,
00:16:55.104 "data_size": 63488
00:16:55.104 },
00:16:55.104 {
00:16:55.104 "name": null,
00:16:55.104 "uuid": "00000000-0000-0000-0000-000000000002",
00:16:55.104 "is_configured": false,
00:16:55.104 "data_offset": 2048,
00:16:55.104 "data_size": 63488
00:16:55.104 },
00:16:55.104 {
00:16:55.104 "name": null,
00:16:55.104 "uuid": "00000000-0000-0000-0000-000000000003",
00:16:55.104 "is_configured": false,
00:16:55.104 "data_offset": 2048,
00:16:55.104 "data_size": 63488
00:16:55.104 },
00:16:55.104 {
00:16:55.104 "name": null,
00:16:55.104 "uuid": "00000000-0000-0000-0000-000000000004",
00:16:55.104 "is_configured": false,
00:16:55.104 "data_offset": 2048,
00:16:55.104 "data_size": 63488
00:16:55.104 }
00:16:55.104 ]
00:16:55.104 }'
00:16:55.104 10:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:55.104 10:01:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.366 10:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:16:55.366 10:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:55.366 10:01:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.366 10:01:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.366 [2024-10-21 10:01:31.861961] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:55.366 [2024-10-21 10:01:31.862086] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:55.366 [2024-10-21 10:01:31.862113] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:16:55.366 [2024-10-21 10:01:31.862125] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:55.366 [2024-10-21 10:01:31.862721] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:55.366 [2024-10-21 10:01:31.862747] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:55.366 [2024-10-21 10:01:31.862846] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:55.366 [2024-10-21 10:01:31.862872] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:55.366 pt2 00:16:55.366 10:01:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.366 10:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:16:55.366 10:01:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:16:55.366 10:01:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.366 [2024-10-21 10:01:31.873945] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:55.366 10:01:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.366 10:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:16:55.366 10:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:55.366 10:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:55.366 10:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:55.366 10:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:55.366 10:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:55.366 10:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:55.366 10:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:55.366 10:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:55.366 10:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:55.366 10:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.366 10:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:55.366 10:01:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.366 10:01:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.366 10:01:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:16:55.366 10:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:55.366 "name": "raid_bdev1", 00:16:55.366 "uuid": "4cd4dc20-5113-4e4d-ab16-5f7be2b89dd1", 00:16:55.366 "strip_size_kb": 64, 00:16:55.366 "state": "configuring", 00:16:55.366 "raid_level": "raid5f", 00:16:55.366 "superblock": true, 00:16:55.366 "num_base_bdevs": 4, 00:16:55.366 "num_base_bdevs_discovered": 1, 00:16:55.366 "num_base_bdevs_operational": 4, 00:16:55.366 "base_bdevs_list": [ 00:16:55.366 { 00:16:55.366 "name": "pt1", 00:16:55.366 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:55.366 "is_configured": true, 00:16:55.366 "data_offset": 2048, 00:16:55.366 "data_size": 63488 00:16:55.366 }, 00:16:55.366 { 00:16:55.366 "name": null, 00:16:55.366 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:55.366 "is_configured": false, 00:16:55.366 "data_offset": 0, 00:16:55.366 "data_size": 63488 00:16:55.366 }, 00:16:55.366 { 00:16:55.366 "name": null, 00:16:55.366 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:55.366 "is_configured": false, 00:16:55.366 "data_offset": 2048, 00:16:55.366 "data_size": 63488 00:16:55.366 }, 00:16:55.366 { 00:16:55.366 "name": null, 00:16:55.366 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:55.366 "is_configured": false, 00:16:55.366 "data_offset": 2048, 00:16:55.366 "data_size": 63488 00:16:55.366 } 00:16:55.366 ] 00:16:55.366 }' 00:16:55.366 10:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:55.366 10:01:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.936 10:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:55.936 10:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:55.936 10:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:16:55.936 10:01:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.936 10:01:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.936 [2024-10-21 10:01:32.249308] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:55.936 [2024-10-21 10:01:32.249414] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:55.936 [2024-10-21 10:01:32.249470] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:55.936 [2024-10-21 10:01:32.249500] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:55.936 [2024-10-21 10:01:32.250047] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:55.936 [2024-10-21 10:01:32.250106] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:55.936 [2024-10-21 10:01:32.250224] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:55.936 [2024-10-21 10:01:32.250278] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:55.936 pt2 00:16:55.936 10:01:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.936 10:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:55.936 10:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:55.936 10:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:55.937 10:01:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.937 10:01:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.937 [2024-10-21 10:01:32.261265] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:16:55.937 [2024-10-21 10:01:32.261363] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:55.937 [2024-10-21 10:01:32.261399] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:16:55.937 [2024-10-21 10:01:32.261425] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:55.937 [2024-10-21 10:01:32.261847] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:55.937 [2024-10-21 10:01:32.261900] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:55.937 [2024-10-21 10:01:32.261989] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:55.937 [2024-10-21 10:01:32.262035] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:55.937 pt3 00:16:55.937 10:01:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.937 10:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:55.937 10:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:55.937 10:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:55.937 10:01:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.937 10:01:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.937 [2024-10-21 10:01:32.273227] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:55.937 [2024-10-21 10:01:32.273277] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:55.937 [2024-10-21 10:01:32.273295] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:16:55.937 [2024-10-21 10:01:32.273303] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:55.937 [2024-10-21 10:01:32.273701] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:55.937 [2024-10-21 10:01:32.273717] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:55.937 [2024-10-21 10:01:32.273778] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:55.937 [2024-10-21 10:01:32.273795] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:55.937 [2024-10-21 10:01:32.273927] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:16:55.937 [2024-10-21 10:01:32.273936] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:55.937 [2024-10-21 10:01:32.274218] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:16:55.937 [2024-10-21 10:01:32.281427] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:16:55.937 [2024-10-21 10:01:32.281450] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:16:55.937 [2024-10-21 10:01:32.281657] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:55.937 pt4 00:16:55.937 10:01:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.937 10:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:55.937 10:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:55.937 10:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:55.937 10:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:55.937 10:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:16:55.937 10:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:55.937 10:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:55.937 10:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:55.937 10:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:55.937 10:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:55.937 10:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:55.937 10:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:55.937 10:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.937 10:01:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.937 10:01:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.937 10:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:55.937 10:01:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.937 10:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:55.937 "name": "raid_bdev1", 00:16:55.937 "uuid": "4cd4dc20-5113-4e4d-ab16-5f7be2b89dd1", 00:16:55.937 "strip_size_kb": 64, 00:16:55.937 "state": "online", 00:16:55.937 "raid_level": "raid5f", 00:16:55.937 "superblock": true, 00:16:55.937 "num_base_bdevs": 4, 00:16:55.937 "num_base_bdevs_discovered": 4, 00:16:55.937 "num_base_bdevs_operational": 4, 00:16:55.937 "base_bdevs_list": [ 00:16:55.937 { 00:16:55.937 "name": "pt1", 00:16:55.937 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:55.937 "is_configured": true, 00:16:55.937 
"data_offset": 2048, 00:16:55.937 "data_size": 63488 00:16:55.937 }, 00:16:55.937 { 00:16:55.937 "name": "pt2", 00:16:55.937 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:55.937 "is_configured": true, 00:16:55.937 "data_offset": 2048, 00:16:55.937 "data_size": 63488 00:16:55.937 }, 00:16:55.937 { 00:16:55.937 "name": "pt3", 00:16:55.937 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:55.937 "is_configured": true, 00:16:55.937 "data_offset": 2048, 00:16:55.937 "data_size": 63488 00:16:55.937 }, 00:16:55.937 { 00:16:55.937 "name": "pt4", 00:16:55.937 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:55.937 "is_configured": true, 00:16:55.937 "data_offset": 2048, 00:16:55.937 "data_size": 63488 00:16:55.937 } 00:16:55.937 ] 00:16:55.937 }' 00:16:55.937 10:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:55.937 10:01:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.197 10:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:16:56.197 10:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:56.197 10:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:56.197 10:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:56.197 10:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:56.197 10:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:56.197 10:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:56.197 10:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:56.197 10:01:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.197 10:01:32 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.197 [2024-10-21 10:01:32.742317] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:56.197 10:01:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.197 10:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:56.197 "name": "raid_bdev1", 00:16:56.197 "aliases": [ 00:16:56.197 "4cd4dc20-5113-4e4d-ab16-5f7be2b89dd1" 00:16:56.197 ], 00:16:56.197 "product_name": "Raid Volume", 00:16:56.197 "block_size": 512, 00:16:56.197 "num_blocks": 190464, 00:16:56.197 "uuid": "4cd4dc20-5113-4e4d-ab16-5f7be2b89dd1", 00:16:56.197 "assigned_rate_limits": { 00:16:56.197 "rw_ios_per_sec": 0, 00:16:56.197 "rw_mbytes_per_sec": 0, 00:16:56.197 "r_mbytes_per_sec": 0, 00:16:56.197 "w_mbytes_per_sec": 0 00:16:56.197 }, 00:16:56.197 "claimed": false, 00:16:56.197 "zoned": false, 00:16:56.197 "supported_io_types": { 00:16:56.197 "read": true, 00:16:56.197 "write": true, 00:16:56.197 "unmap": false, 00:16:56.197 "flush": false, 00:16:56.197 "reset": true, 00:16:56.197 "nvme_admin": false, 00:16:56.197 "nvme_io": false, 00:16:56.197 "nvme_io_md": false, 00:16:56.197 "write_zeroes": true, 00:16:56.197 "zcopy": false, 00:16:56.197 "get_zone_info": false, 00:16:56.197 "zone_management": false, 00:16:56.197 "zone_append": false, 00:16:56.197 "compare": false, 00:16:56.197 "compare_and_write": false, 00:16:56.197 "abort": false, 00:16:56.197 "seek_hole": false, 00:16:56.197 "seek_data": false, 00:16:56.197 "copy": false, 00:16:56.197 "nvme_iov_md": false 00:16:56.197 }, 00:16:56.197 "driver_specific": { 00:16:56.197 "raid": { 00:16:56.197 "uuid": "4cd4dc20-5113-4e4d-ab16-5f7be2b89dd1", 00:16:56.197 "strip_size_kb": 64, 00:16:56.197 "state": "online", 00:16:56.197 "raid_level": "raid5f", 00:16:56.197 "superblock": true, 00:16:56.197 "num_base_bdevs": 4, 00:16:56.197 "num_base_bdevs_discovered": 4, 
00:16:56.197 "num_base_bdevs_operational": 4, 00:16:56.197 "base_bdevs_list": [ 00:16:56.197 { 00:16:56.197 "name": "pt1", 00:16:56.197 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:56.197 "is_configured": true, 00:16:56.197 "data_offset": 2048, 00:16:56.197 "data_size": 63488 00:16:56.197 }, 00:16:56.197 { 00:16:56.197 "name": "pt2", 00:16:56.197 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:56.197 "is_configured": true, 00:16:56.197 "data_offset": 2048, 00:16:56.197 "data_size": 63488 00:16:56.197 }, 00:16:56.197 { 00:16:56.197 "name": "pt3", 00:16:56.197 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:56.197 "is_configured": true, 00:16:56.197 "data_offset": 2048, 00:16:56.197 "data_size": 63488 00:16:56.197 }, 00:16:56.197 { 00:16:56.197 "name": "pt4", 00:16:56.197 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:56.197 "is_configured": true, 00:16:56.197 "data_offset": 2048, 00:16:56.197 "data_size": 63488 00:16:56.197 } 00:16:56.197 ] 00:16:56.197 } 00:16:56.197 } 00:16:56.197 }' 00:16:56.197 10:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:56.457 10:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:56.457 pt2 00:16:56.457 pt3 00:16:56.457 pt4' 00:16:56.457 10:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:56.457 10:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:56.457 10:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:56.457 10:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:56.457 10:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:16:56.457 10:01:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.458 10:01:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.458 10:01:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.458 10:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:56.458 10:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:56.458 10:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:56.458 10:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:56.458 10:01:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.458 10:01:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.458 10:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:56.458 10:01:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.458 10:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:56.458 10:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:56.458 10:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:56.458 10:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:56.458 10:01:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.458 10:01:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.458 10:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:56.458 10:01:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.458 10:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:56.458 10:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:56.458 10:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:56.458 10:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:16:56.458 10:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:56.458 10:01:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.458 10:01:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.458 10:01:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.718 10:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:56.718 10:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:56.718 10:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:56.718 10:01:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.718 10:01:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.718 10:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:56.718 [2024-10-21 10:01:33.065693] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:56.718 10:01:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.718 10:01:33 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 4cd4dc20-5113-4e4d-ab16-5f7be2b89dd1 '!=' 4cd4dc20-5113-4e4d-ab16-5f7be2b89dd1 ']' 00:16:56.718 10:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:16:56.718 10:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:56.718 10:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:56.718 10:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:16:56.718 10:01:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.718 10:01:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.718 [2024-10-21 10:01:33.109493] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:56.718 10:01:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.718 10:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:56.718 10:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:56.718 10:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:56.718 10:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:56.718 10:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:56.718 10:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:56.718 10:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:56.718 10:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:56.718 10:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:16:56.718 10:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:56.718 10:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.718 10:01:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.718 10:01:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.718 10:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:56.718 10:01:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.718 10:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:56.718 "name": "raid_bdev1", 00:16:56.718 "uuid": "4cd4dc20-5113-4e4d-ab16-5f7be2b89dd1", 00:16:56.718 "strip_size_kb": 64, 00:16:56.718 "state": "online", 00:16:56.718 "raid_level": "raid5f", 00:16:56.718 "superblock": true, 00:16:56.718 "num_base_bdevs": 4, 00:16:56.718 "num_base_bdevs_discovered": 3, 00:16:56.718 "num_base_bdevs_operational": 3, 00:16:56.718 "base_bdevs_list": [ 00:16:56.718 { 00:16:56.718 "name": null, 00:16:56.718 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:56.718 "is_configured": false, 00:16:56.718 "data_offset": 0, 00:16:56.718 "data_size": 63488 00:16:56.718 }, 00:16:56.718 { 00:16:56.718 "name": "pt2", 00:16:56.718 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:56.718 "is_configured": true, 00:16:56.718 "data_offset": 2048, 00:16:56.718 "data_size": 63488 00:16:56.718 }, 00:16:56.718 { 00:16:56.718 "name": "pt3", 00:16:56.718 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:56.718 "is_configured": true, 00:16:56.718 "data_offset": 2048, 00:16:56.718 "data_size": 63488 00:16:56.718 }, 00:16:56.718 { 00:16:56.718 "name": "pt4", 00:16:56.718 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:56.718 "is_configured": true, 00:16:56.718 
"data_offset": 2048, 00:16:56.718 "data_size": 63488 00:16:56.718 } 00:16:56.718 ] 00:16:56.718 }' 00:16:56.718 10:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:56.718 10:01:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.978 10:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:56.978 10:01:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.978 10:01:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.978 [2024-10-21 10:01:33.544705] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:56.978 [2024-10-21 10:01:33.544785] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:56.978 [2024-10-21 10:01:33.544884] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:56.978 [2024-10-21 10:01:33.545012] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:56.978 [2024-10-21 10:01:33.545058] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:16:56.978 10:01:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.978 10:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:16:56.978 10:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.978 10:01:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.978 10:01:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.978 10:01:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.238 10:01:33 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:16:57.238 10:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:16:57.238 10:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:16:57.238 10:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:57.238 10:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:16:57.239 10:01:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.239 10:01:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.239 10:01:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.239 10:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:57.239 10:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:57.239 10:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:16:57.239 10:01:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.239 10:01:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.239 10:01:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.239 10:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:57.239 10:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:57.239 10:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:16:57.239 10:01:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.239 10:01:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.239 10:01:33 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.239 10:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:57.239 10:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:57.239 10:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:16:57.239 10:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:57.239 10:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:57.239 10:01:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.239 10:01:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.239 [2024-10-21 10:01:33.628552] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:57.239 [2024-10-21 10:01:33.628696] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:57.239 [2024-10-21 10:01:33.628722] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:16:57.239 [2024-10-21 10:01:33.628733] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:57.239 [2024-10-21 10:01:33.631393] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:57.239 [2024-10-21 10:01:33.631437] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:57.239 [2024-10-21 10:01:33.631534] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:57.239 [2024-10-21 10:01:33.631600] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:57.239 pt2 00:16:57.239 10:01:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.239 10:01:33 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:57.239 10:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:57.239 10:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:57.239 10:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:57.239 10:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:57.239 10:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:57.239 10:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:57.239 10:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:57.239 10:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:57.239 10:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:57.239 10:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.239 10:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.239 10:01:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.239 10:01:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.239 10:01:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.239 10:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:57.239 "name": "raid_bdev1", 00:16:57.239 "uuid": "4cd4dc20-5113-4e4d-ab16-5f7be2b89dd1", 00:16:57.239 "strip_size_kb": 64, 00:16:57.239 "state": "configuring", 00:16:57.239 "raid_level": "raid5f", 00:16:57.239 "superblock": true, 00:16:57.239 
"num_base_bdevs": 4, 00:16:57.239 "num_base_bdevs_discovered": 1, 00:16:57.239 "num_base_bdevs_operational": 3, 00:16:57.239 "base_bdevs_list": [ 00:16:57.239 { 00:16:57.239 "name": null, 00:16:57.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.239 "is_configured": false, 00:16:57.239 "data_offset": 2048, 00:16:57.239 "data_size": 63488 00:16:57.239 }, 00:16:57.239 { 00:16:57.239 "name": "pt2", 00:16:57.239 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:57.239 "is_configured": true, 00:16:57.239 "data_offset": 2048, 00:16:57.239 "data_size": 63488 00:16:57.239 }, 00:16:57.239 { 00:16:57.239 "name": null, 00:16:57.239 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:57.239 "is_configured": false, 00:16:57.239 "data_offset": 2048, 00:16:57.239 "data_size": 63488 00:16:57.239 }, 00:16:57.239 { 00:16:57.239 "name": null, 00:16:57.239 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:57.239 "is_configured": false, 00:16:57.239 "data_offset": 2048, 00:16:57.239 "data_size": 63488 00:16:57.239 } 00:16:57.239 ] 00:16:57.239 }' 00:16:57.239 10:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:57.239 10:01:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.808 10:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:16:57.808 10:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:57.808 10:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:57.808 10:01:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.808 10:01:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.808 [2024-10-21 10:01:34.107804] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:57.808 [2024-10-21 
10:01:34.107938] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:57.808 [2024-10-21 10:01:34.107994] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:16:57.808 [2024-10-21 10:01:34.108029] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:57.808 [2024-10-21 10:01:34.108662] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:57.808 [2024-10-21 10:01:34.108723] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:57.808 [2024-10-21 10:01:34.108867] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:57.808 [2024-10-21 10:01:34.108934] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:57.808 pt3 00:16:57.808 10:01:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.808 10:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:57.808 10:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:57.808 10:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:57.808 10:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:57.808 10:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:57.809 10:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:57.809 10:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:57.809 10:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:57.809 10:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:16:57.809 10:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:57.809 10:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.809 10:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.809 10:01:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.809 10:01:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.809 10:01:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.809 10:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:57.809 "name": "raid_bdev1", 00:16:57.809 "uuid": "4cd4dc20-5113-4e4d-ab16-5f7be2b89dd1", 00:16:57.809 "strip_size_kb": 64, 00:16:57.809 "state": "configuring", 00:16:57.809 "raid_level": "raid5f", 00:16:57.809 "superblock": true, 00:16:57.809 "num_base_bdevs": 4, 00:16:57.809 "num_base_bdevs_discovered": 2, 00:16:57.809 "num_base_bdevs_operational": 3, 00:16:57.809 "base_bdevs_list": [ 00:16:57.809 { 00:16:57.809 "name": null, 00:16:57.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.809 "is_configured": false, 00:16:57.809 "data_offset": 2048, 00:16:57.809 "data_size": 63488 00:16:57.809 }, 00:16:57.809 { 00:16:57.809 "name": "pt2", 00:16:57.809 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:57.809 "is_configured": true, 00:16:57.809 "data_offset": 2048, 00:16:57.809 "data_size": 63488 00:16:57.809 }, 00:16:57.809 { 00:16:57.809 "name": "pt3", 00:16:57.809 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:57.809 "is_configured": true, 00:16:57.809 "data_offset": 2048, 00:16:57.809 "data_size": 63488 00:16:57.809 }, 00:16:57.809 { 00:16:57.809 "name": null, 00:16:57.809 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:57.809 "is_configured": false, 00:16:57.809 "data_offset": 2048, 
00:16:57.809 "data_size": 63488 00:16:57.809 } 00:16:57.809 ] 00:16:57.809 }' 00:16:57.809 10:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:57.809 10:01:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.069 10:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:16:58.069 10:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:58.069 10:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:16:58.069 10:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:58.069 10:01:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.069 10:01:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.069 [2024-10-21 10:01:34.527164] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:58.069 [2024-10-21 10:01:34.527252] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:58.069 [2024-10-21 10:01:34.527281] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:16:58.069 [2024-10-21 10:01:34.527292] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:58.069 [2024-10-21 10:01:34.527925] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:58.069 [2024-10-21 10:01:34.528003] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:58.069 [2024-10-21 10:01:34.528137] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:58.069 [2024-10-21 10:01:34.528168] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:58.069 [2024-10-21 10:01:34.528339] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:16:58.069 [2024-10-21 10:01:34.528349] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:58.069 [2024-10-21 10:01:34.528675] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:58.069 [2024-10-21 10:01:34.537544] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:16:58.069 [2024-10-21 10:01:34.537575] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:16:58.069 [2024-10-21 10:01:34.537965] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:58.069 pt4 00:16:58.069 10:01:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.069 10:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:58.069 10:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:58.069 10:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:58.069 10:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:58.069 10:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:58.069 10:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:58.069 10:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:58.069 10:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:58.069 10:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:58.069 10:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:58.069 
10:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.069 10:01:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.069 10:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:58.069 10:01:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.069 10:01:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.069 10:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:58.069 "name": "raid_bdev1", 00:16:58.069 "uuid": "4cd4dc20-5113-4e4d-ab16-5f7be2b89dd1", 00:16:58.069 "strip_size_kb": 64, 00:16:58.069 "state": "online", 00:16:58.069 "raid_level": "raid5f", 00:16:58.069 "superblock": true, 00:16:58.069 "num_base_bdevs": 4, 00:16:58.069 "num_base_bdevs_discovered": 3, 00:16:58.069 "num_base_bdevs_operational": 3, 00:16:58.069 "base_bdevs_list": [ 00:16:58.069 { 00:16:58.069 "name": null, 00:16:58.069 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:58.069 "is_configured": false, 00:16:58.069 "data_offset": 2048, 00:16:58.069 "data_size": 63488 00:16:58.069 }, 00:16:58.069 { 00:16:58.069 "name": "pt2", 00:16:58.069 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:58.069 "is_configured": true, 00:16:58.069 "data_offset": 2048, 00:16:58.069 "data_size": 63488 00:16:58.069 }, 00:16:58.069 { 00:16:58.069 "name": "pt3", 00:16:58.069 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:58.069 "is_configured": true, 00:16:58.069 "data_offset": 2048, 00:16:58.069 "data_size": 63488 00:16:58.069 }, 00:16:58.069 { 00:16:58.069 "name": "pt4", 00:16:58.069 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:58.069 "is_configured": true, 00:16:58.069 "data_offset": 2048, 00:16:58.069 "data_size": 63488 00:16:58.069 } 00:16:58.069 ] 00:16:58.069 }' 00:16:58.069 10:01:34 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:58.069 10:01:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.638 10:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:58.638 10:01:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.638 10:01:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.638 [2024-10-21 10:01:34.972523] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:58.638 [2024-10-21 10:01:34.972621] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:58.638 [2024-10-21 10:01:34.972758] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:58.638 [2024-10-21 10:01:34.972885] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:58.638 [2024-10-21 10:01:34.972943] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:16:58.638 10:01:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.638 10:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.638 10:01:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.638 10:01:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.638 10:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:16:58.638 10:01:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.638 10:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:16:58.638 10:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:16:58.638 10:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:16:58.638 10:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:16:58.638 10:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:16:58.638 10:01:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.638 10:01:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.638 10:01:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.638 10:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:58.638 10:01:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.638 10:01:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.638 [2024-10-21 10:01:35.048342] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:58.638 [2024-10-21 10:01:35.048468] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:58.638 [2024-10-21 10:01:35.048495] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:16:58.638 [2024-10-21 10:01:35.048509] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:58.638 [2024-10-21 10:01:35.051753] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:58.638 [2024-10-21 10:01:35.051802] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:58.638 [2024-10-21 10:01:35.051916] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:58.638 [2024-10-21 10:01:35.051984] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:58.638 
[2024-10-21 10:01:35.052164] bdev_raid.c:3679:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:58.638 [2024-10-21 10:01:35.052181] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:58.638 [2024-10-21 10:01:35.052202] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state configuring 00:16:58.638 [2024-10-21 10:01:35.052282] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:58.638 [2024-10-21 10:01:35.052449] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:58.638 pt1 00:16:58.638 10:01:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.638 10:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:16:58.638 10:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:58.638 10:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:58.638 10:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:58.638 10:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:58.638 10:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:58.638 10:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:58.638 10:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:58.638 10:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:58.638 10:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:58.639 10:01:35 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:16:58.639 10:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.639 10:01:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.639 10:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:58.639 10:01:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.639 10:01:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.639 10:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:58.639 "name": "raid_bdev1", 00:16:58.639 "uuid": "4cd4dc20-5113-4e4d-ab16-5f7be2b89dd1", 00:16:58.639 "strip_size_kb": 64, 00:16:58.639 "state": "configuring", 00:16:58.639 "raid_level": "raid5f", 00:16:58.639 "superblock": true, 00:16:58.639 "num_base_bdevs": 4, 00:16:58.639 "num_base_bdevs_discovered": 2, 00:16:58.639 "num_base_bdevs_operational": 3, 00:16:58.639 "base_bdevs_list": [ 00:16:58.639 { 00:16:58.639 "name": null, 00:16:58.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:58.639 "is_configured": false, 00:16:58.639 "data_offset": 2048, 00:16:58.639 "data_size": 63488 00:16:58.639 }, 00:16:58.639 { 00:16:58.639 "name": "pt2", 00:16:58.639 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:58.639 "is_configured": true, 00:16:58.639 "data_offset": 2048, 00:16:58.639 "data_size": 63488 00:16:58.639 }, 00:16:58.639 { 00:16:58.639 "name": "pt3", 00:16:58.639 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:58.639 "is_configured": true, 00:16:58.639 "data_offset": 2048, 00:16:58.639 "data_size": 63488 00:16:58.639 }, 00:16:58.639 { 00:16:58.639 "name": null, 00:16:58.639 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:58.639 "is_configured": false, 00:16:58.639 "data_offset": 2048, 00:16:58.639 "data_size": 63488 00:16:58.639 } 00:16:58.639 ] 
00:16:58.639 }' 00:16:58.639 10:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:58.639 10:01:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.898 10:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:58.898 10:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:16:58.898 10:01:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.898 10:01:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.158 10:01:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.158 10:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:16:59.159 10:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:59.159 10:01:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.159 10:01:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.159 [2024-10-21 10:01:35.523801] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:59.159 [2024-10-21 10:01:35.523942] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:59.159 [2024-10-21 10:01:35.523996] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:16:59.159 [2024-10-21 10:01:35.524033] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:59.159 [2024-10-21 10:01:35.524651] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:59.159 [2024-10-21 10:01:35.524717] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:16:59.159 [2024-10-21 10:01:35.524854] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:59.159 [2024-10-21 10:01:35.524915] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:59.159 [2024-10-21 10:01:35.525106] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:16:59.159 [2024-10-21 10:01:35.525150] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:59.159 [2024-10-21 10:01:35.525517] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:59.159 [2024-10-21 10:01:35.534974] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:16:59.159 [2024-10-21 10:01:35.535043] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:16:59.159 [2024-10-21 10:01:35.535418] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:59.159 pt4 00:16:59.159 10:01:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.159 10:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:59.159 10:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:59.159 10:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:59.159 10:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:59.159 10:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:59.159 10:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:59.159 10:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:59.159 10:01:35 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:59.159 10:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:59.159 10:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:59.159 10:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:59.159 10:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.159 10:01:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.159 10:01:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.159 10:01:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.159 10:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:59.159 "name": "raid_bdev1", 00:16:59.159 "uuid": "4cd4dc20-5113-4e4d-ab16-5f7be2b89dd1", 00:16:59.159 "strip_size_kb": 64, 00:16:59.159 "state": "online", 00:16:59.159 "raid_level": "raid5f", 00:16:59.159 "superblock": true, 00:16:59.159 "num_base_bdevs": 4, 00:16:59.159 "num_base_bdevs_discovered": 3, 00:16:59.159 "num_base_bdevs_operational": 3, 00:16:59.159 "base_bdevs_list": [ 00:16:59.159 { 00:16:59.159 "name": null, 00:16:59.159 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:59.159 "is_configured": false, 00:16:59.159 "data_offset": 2048, 00:16:59.159 "data_size": 63488 00:16:59.159 }, 00:16:59.159 { 00:16:59.159 "name": "pt2", 00:16:59.159 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:59.159 "is_configured": true, 00:16:59.159 "data_offset": 2048, 00:16:59.159 "data_size": 63488 00:16:59.159 }, 00:16:59.159 { 00:16:59.159 "name": "pt3", 00:16:59.159 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:59.159 "is_configured": true, 00:16:59.159 "data_offset": 2048, 00:16:59.159 "data_size": 63488 
00:16:59.159 }, 00:16:59.159 { 00:16:59.159 "name": "pt4", 00:16:59.159 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:59.159 "is_configured": true, 00:16:59.159 "data_offset": 2048, 00:16:59.159 "data_size": 63488 00:16:59.159 } 00:16:59.159 ] 00:16:59.159 }' 00:16:59.159 10:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:59.159 10:01:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.728 10:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:59.728 10:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:16:59.728 10:01:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.728 10:01:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.728 10:01:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.728 10:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:16:59.728 10:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:16:59.728 10:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:59.728 10:01:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.728 10:01:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.728 [2024-10-21 10:01:36.057778] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:59.728 10:01:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.728 10:01:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 4cd4dc20-5113-4e4d-ab16-5f7be2b89dd1 '!=' 4cd4dc20-5113-4e4d-ab16-5f7be2b89dd1 ']' 00:16:59.728 10:01:36 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 83767 00:16:59.728 10:01:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 83767 ']' 00:16:59.728 10:01:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # kill -0 83767 00:16:59.728 10:01:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # uname 00:16:59.728 10:01:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:59.728 10:01:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83767 00:16:59.728 killing process with pid 83767 00:16:59.728 10:01:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:59.728 10:01:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:59.728 10:01:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83767' 00:16:59.728 10:01:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@969 -- # kill 83767 00:16:59.728 [2024-10-21 10:01:36.125928] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:59.728 [2024-10-21 10:01:36.126060] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:59.728 10:01:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@974 -- # wait 83767 00:16:59.728 [2024-10-21 10:01:36.126176] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:59.728 [2024-10-21 10:01:36.126193] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:16:59.988 [2024-10-21 10:01:36.571242] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:01.366 10:01:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:17:01.366 
00:17:01.366 real 0m8.653s 00:17:01.366 user 0m13.270s 00:17:01.366 sys 0m1.688s 00:17:01.366 ************************************ 00:17:01.366 END TEST raid5f_superblock_test 00:17:01.366 ************************************ 00:17:01.366 10:01:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:01.366 10:01:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.366 10:01:37 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:17:01.366 10:01:37 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:17:01.366 10:01:37 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:17:01.366 10:01:37 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:01.366 10:01:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:01.366 ************************************ 00:17:01.366 START TEST raid5f_rebuild_test 00:17:01.366 ************************************ 00:17:01.366 10:01:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 4 false false true 00:17:01.366 10:01:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:17:01.366 10:01:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:17:01.366 10:01:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:17:01.366 10:01:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:01.366 10:01:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:01.366 10:01:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:01.366 10:01:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:01.366 10:01:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:17:01.366 10:01:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:01.366 10:01:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:01.366 10:01:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:01.366 10:01:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:01.366 10:01:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:01.366 10:01:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:17:01.366 10:01:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:01.366 10:01:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:01.366 10:01:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:17:01.366 10:01:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:01.366 10:01:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:01.366 10:01:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:01.366 10:01:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:01.366 10:01:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:01.366 10:01:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:01.366 10:01:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:01.366 10:01:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:01.366 10:01:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:01.366 10:01:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:17:01.366 10:01:37 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:17:01.366 10:01:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:17:01.366 10:01:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:17:01.366 10:01:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:17:01.366 10:01:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=84253 00:17:01.366 10:01:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 84253 00:17:01.366 10:01:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:01.366 10:01:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 84253 ']' 00:17:01.366 10:01:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:01.366 10:01:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:01.366 10:01:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:01.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:01.366 10:01:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:01.366 10:01:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.625 [2024-10-21 10:01:37.983788] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:17:01.625 [2024-10-21 10:01:37.983994] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:17:01.625 Zero copy mechanism will not be used. 
00:17:01.625 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84253 ] 00:17:01.625 [2024-10-21 10:01:38.145427] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:01.883 [2024-10-21 10:01:38.282383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:02.141 [2024-10-21 10:01:38.537794] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:02.141 [2024-10-21 10:01:38.537924] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:02.402 10:01:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:02.402 10:01:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:17:02.402 10:01:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:02.402 10:01:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:02.402 10:01:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.402 10:01:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.402 BaseBdev1_malloc 00:17:02.402 10:01:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.402 10:01:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:02.402 10:01:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.402 10:01:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.402 [2024-10-21 10:01:38.868002] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:02.402 [2024-10-21 10:01:38.868131] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:17:02.402 [2024-10-21 10:01:38.868178] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:17:02.402 [2024-10-21 10:01:38.868213] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:02.402 [2024-10-21 10:01:38.870648] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:02.402 [2024-10-21 10:01:38.870733] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:02.402 BaseBdev1 00:17:02.402 10:01:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.402 10:01:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:02.402 10:01:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:02.402 10:01:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.402 10:01:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.402 BaseBdev2_malloc 00:17:02.402 10:01:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.402 10:01:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:02.402 10:01:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.402 10:01:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.402 [2024-10-21 10:01:38.931353] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:02.402 [2024-10-21 10:01:38.931454] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:02.402 [2024-10-21 10:01:38.931478] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:17:02.402 [2024-10-21 10:01:38.931490] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:02.402 [2024-10-21 10:01:38.933918] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:02.402 [2024-10-21 10:01:38.933952] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:02.402 BaseBdev2 00:17:02.402 10:01:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.402 10:01:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:02.402 10:01:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:02.402 10:01:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.402 10:01:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.662 BaseBdev3_malloc 00:17:02.662 10:01:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.662 10:01:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:17:02.662 10:01:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.662 10:01:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.662 [2024-10-21 10:01:39.010099] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:17:02.662 [2024-10-21 10:01:39.010154] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:02.662 [2024-10-21 10:01:39.010178] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:17:02.662 [2024-10-21 10:01:39.010189] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:02.662 [2024-10-21 10:01:39.012835] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:02.662 [2024-10-21 
10:01:39.012935] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:02.662 BaseBdev3 00:17:02.662 10:01:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.662 10:01:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:02.662 10:01:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:17:02.662 10:01:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.662 10:01:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.662 BaseBdev4_malloc 00:17:02.662 10:01:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.662 10:01:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:17:02.662 10:01:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.662 10:01:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.662 [2024-10-21 10:01:39.074975] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:17:02.662 [2024-10-21 10:01:39.075047] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:02.662 [2024-10-21 10:01:39.075067] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:02.662 [2024-10-21 10:01:39.075077] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:02.662 [2024-10-21 10:01:39.077460] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:02.662 [2024-10-21 10:01:39.077498] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:17:02.662 BaseBdev4 00:17:02.662 10:01:39 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.662 10:01:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:02.662 10:01:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.662 10:01:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.662 spare_malloc 00:17:02.662 10:01:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.662 10:01:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:02.662 10:01:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.662 10:01:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.662 spare_delay 00:17:02.662 10:01:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.662 10:01:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:02.662 10:01:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.662 10:01:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.662 [2024-10-21 10:01:39.151847] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:02.662 [2024-10-21 10:01:39.151913] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:02.662 [2024-10-21 10:01:39.151932] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:17:02.662 [2024-10-21 10:01:39.151943] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:02.662 [2024-10-21 10:01:39.154349] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:02.662 [2024-10-21 10:01:39.154386] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:02.662 spare 00:17:02.662 10:01:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.662 10:01:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:17:02.662 10:01:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.662 10:01:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.662 [2024-10-21 10:01:39.163886] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:02.662 [2024-10-21 10:01:39.166045] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:02.662 [2024-10-21 10:01:39.166114] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:02.662 [2024-10-21 10:01:39.166159] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:02.662 [2024-10-21 10:01:39.166249] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000005b80 00:17:02.662 [2024-10-21 10:01:39.166260] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:17:02.662 [2024-10-21 10:01:39.166516] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:02.662 [2024-10-21 10:01:39.175196] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000005b80 00:17:02.662 [2024-10-21 10:01:39.175264] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000005b80 00:17:02.662 [2024-10-21 10:01:39.175472] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:02.662 10:01:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.662 10:01:39 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:02.662 10:01:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:02.662 10:01:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:02.662 10:01:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:02.662 10:01:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:02.662 10:01:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:02.662 10:01:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:02.662 10:01:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:02.662 10:01:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:02.662 10:01:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:02.662 10:01:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.662 10:01:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.662 10:01:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:02.662 10:01:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.662 10:01:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.662 10:01:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:02.662 "name": "raid_bdev1", 00:17:02.662 "uuid": "5c35165d-1d09-4977-953e-8058f7148c1f", 00:17:02.662 "strip_size_kb": 64, 00:17:02.662 "state": "online", 00:17:02.662 "raid_level": "raid5f", 00:17:02.662 "superblock": false, 00:17:02.662 "num_base_bdevs": 4, 00:17:02.662 
"num_base_bdevs_discovered": 4, 00:17:02.662 "num_base_bdevs_operational": 4, 00:17:02.662 "base_bdevs_list": [ 00:17:02.662 { 00:17:02.662 "name": "BaseBdev1", 00:17:02.662 "uuid": "4b425e47-bc3b-55e3-9dfb-a87a27373e4f", 00:17:02.662 "is_configured": true, 00:17:02.662 "data_offset": 0, 00:17:02.662 "data_size": 65536 00:17:02.662 }, 00:17:02.662 { 00:17:02.662 "name": "BaseBdev2", 00:17:02.662 "uuid": "77acfa71-6495-5831-a751-b48ff9fe6556", 00:17:02.662 "is_configured": true, 00:17:02.662 "data_offset": 0, 00:17:02.662 "data_size": 65536 00:17:02.662 }, 00:17:02.662 { 00:17:02.662 "name": "BaseBdev3", 00:17:02.662 "uuid": "7ca0e72c-63d5-58b2-b761-18c24c49b613", 00:17:02.662 "is_configured": true, 00:17:02.662 "data_offset": 0, 00:17:02.662 "data_size": 65536 00:17:02.662 }, 00:17:02.662 { 00:17:02.662 "name": "BaseBdev4", 00:17:02.662 "uuid": "fbc04c83-8c22-5a05-b9af-3d10b756d89f", 00:17:02.663 "is_configured": true, 00:17:02.663 "data_offset": 0, 00:17:02.663 "data_size": 65536 00:17:02.663 } 00:17:02.663 ] 00:17:02.663 }' 00:17:02.663 10:01:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:02.663 10:01:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.232 10:01:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:03.232 10:01:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:03.232 10:01:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.232 10:01:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.232 [2024-10-21 10:01:39.656017] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:03.232 10:01:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.232 10:01:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 
00:17:03.232 10:01:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:03.232 10:01:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.232 10:01:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.232 10:01:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.232 10:01:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.232 10:01:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:17:03.232 10:01:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:03.232 10:01:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:03.232 10:01:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:03.232 10:01:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:03.232 10:01:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:03.232 10:01:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:03.232 10:01:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:03.232 10:01:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:03.232 10:01:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:03.232 10:01:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:17:03.232 10:01:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:03.232 10:01:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:03.232 10:01:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:03.535 [2024-10-21 10:01:39.899504] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:03.535 /dev/nbd0 00:17:03.535 10:01:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:03.535 10:01:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:03.535 10:01:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:17:03.535 10:01:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:17:03.535 10:01:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:03.535 10:01:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:03.535 10:01:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:17:03.535 10:01:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:17:03.535 10:01:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:03.535 10:01:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:03.535 10:01:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:03.535 1+0 records in 00:17:03.535 1+0 records out 00:17:03.535 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000413854 s, 9.9 MB/s 00:17:03.535 10:01:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:03.535 10:01:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:17:03.535 10:01:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:17:03.535 10:01:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:03.535 10:01:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:17:03.535 10:01:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:03.535 10:01:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:03.535 10:01:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:17:03.535 10:01:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:17:03.535 10:01:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:17:03.535 10:01:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:17:04.130 512+0 records in 00:17:04.130 512+0 records out 00:17:04.130 100663296 bytes (101 MB, 96 MiB) copied, 0.48979 s, 206 MB/s 00:17:04.130 10:01:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:04.130 10:01:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:04.130 10:01:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:04.130 10:01:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:04.130 10:01:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:17:04.130 10:01:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:04.130 10:01:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:04.130 10:01:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:04.130 [2024-10-21 10:01:40.688904] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:17:04.130 10:01:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:04.130 10:01:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:04.130 10:01:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:04.130 10:01:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:04.130 10:01:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:04.130 10:01:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:04.130 10:01:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:04.130 10:01:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:04.131 10:01:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.131 10:01:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:04.131 [2024-10-21 10:01:40.708308] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:04.131 10:01:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.131 10:01:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:04.131 10:01:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:04.131 10:01:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:04.131 10:01:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:04.131 10:01:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:04.131 10:01:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:04.131 10:01:40 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:04.131 10:01:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:04.131 10:01:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:04.131 10:01:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:04.131 10:01:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.131 10:01:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:04.131 10:01:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.131 10:01:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:04.390 10:01:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.390 10:01:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:04.390 "name": "raid_bdev1", 00:17:04.390 "uuid": "5c35165d-1d09-4977-953e-8058f7148c1f", 00:17:04.390 "strip_size_kb": 64, 00:17:04.390 "state": "online", 00:17:04.390 "raid_level": "raid5f", 00:17:04.390 "superblock": false, 00:17:04.390 "num_base_bdevs": 4, 00:17:04.390 "num_base_bdevs_discovered": 3, 00:17:04.390 "num_base_bdevs_operational": 3, 00:17:04.390 "base_bdevs_list": [ 00:17:04.390 { 00:17:04.390 "name": null, 00:17:04.390 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:04.390 "is_configured": false, 00:17:04.390 "data_offset": 0, 00:17:04.390 "data_size": 65536 00:17:04.390 }, 00:17:04.390 { 00:17:04.390 "name": "BaseBdev2", 00:17:04.390 "uuid": "77acfa71-6495-5831-a751-b48ff9fe6556", 00:17:04.390 "is_configured": true, 00:17:04.390 "data_offset": 0, 00:17:04.390 "data_size": 65536 00:17:04.390 }, 00:17:04.390 { 00:17:04.390 "name": "BaseBdev3", 00:17:04.390 "uuid": "7ca0e72c-63d5-58b2-b761-18c24c49b613", 00:17:04.390 "is_configured": true, 00:17:04.390 
"data_offset": 0, 00:17:04.390 "data_size": 65536 00:17:04.390 }, 00:17:04.390 { 00:17:04.390 "name": "BaseBdev4", 00:17:04.390 "uuid": "fbc04c83-8c22-5a05-b9af-3d10b756d89f", 00:17:04.390 "is_configured": true, 00:17:04.390 "data_offset": 0, 00:17:04.390 "data_size": 65536 00:17:04.390 } 00:17:04.390 ] 00:17:04.390 }' 00:17:04.390 10:01:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:04.390 10:01:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:04.650 10:01:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:04.650 10:01:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.650 10:01:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:04.650 [2024-10-21 10:01:41.175464] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:04.650 [2024-10-21 10:01:41.193369] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b4e0 00:17:04.650 10:01:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.650 10:01:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:04.650 [2024-10-21 10:01:41.204907] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:06.031 10:01:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:06.031 10:01:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:06.031 10:01:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:06.031 10:01:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:06.031 10:01:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:06.031 
10:01:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.031 10:01:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.031 10:01:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.031 10:01:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.031 10:01:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.031 10:01:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:06.031 "name": "raid_bdev1", 00:17:06.031 "uuid": "5c35165d-1d09-4977-953e-8058f7148c1f", 00:17:06.031 "strip_size_kb": 64, 00:17:06.031 "state": "online", 00:17:06.031 "raid_level": "raid5f", 00:17:06.031 "superblock": false, 00:17:06.031 "num_base_bdevs": 4, 00:17:06.031 "num_base_bdevs_discovered": 4, 00:17:06.031 "num_base_bdevs_operational": 4, 00:17:06.031 "process": { 00:17:06.031 "type": "rebuild", 00:17:06.031 "target": "spare", 00:17:06.031 "progress": { 00:17:06.031 "blocks": 19200, 00:17:06.031 "percent": 9 00:17:06.031 } 00:17:06.031 }, 00:17:06.031 "base_bdevs_list": [ 00:17:06.031 { 00:17:06.031 "name": "spare", 00:17:06.031 "uuid": "35dcd70a-a046-5a6d-bdf6-ceb599f9188f", 00:17:06.031 "is_configured": true, 00:17:06.031 "data_offset": 0, 00:17:06.031 "data_size": 65536 00:17:06.031 }, 00:17:06.031 { 00:17:06.031 "name": "BaseBdev2", 00:17:06.031 "uuid": "77acfa71-6495-5831-a751-b48ff9fe6556", 00:17:06.031 "is_configured": true, 00:17:06.031 "data_offset": 0, 00:17:06.031 "data_size": 65536 00:17:06.031 }, 00:17:06.031 { 00:17:06.031 "name": "BaseBdev3", 00:17:06.031 "uuid": "7ca0e72c-63d5-58b2-b761-18c24c49b613", 00:17:06.031 "is_configured": true, 00:17:06.031 "data_offset": 0, 00:17:06.031 "data_size": 65536 00:17:06.031 }, 00:17:06.031 { 00:17:06.031 "name": "BaseBdev4", 00:17:06.031 "uuid": 
"fbc04c83-8c22-5a05-b9af-3d10b756d89f", 00:17:06.031 "is_configured": true, 00:17:06.031 "data_offset": 0, 00:17:06.031 "data_size": 65536 00:17:06.031 } 00:17:06.031 ] 00:17:06.031 }' 00:17:06.031 10:01:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:06.031 10:01:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:06.031 10:01:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:06.031 10:01:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:06.031 10:01:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:06.031 10:01:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.031 10:01:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.031 [2024-10-21 10:01:42.360483] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:06.031 [2024-10-21 10:01:42.412553] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:06.031 [2024-10-21 10:01:42.412699] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:06.031 [2024-10-21 10:01:42.412720] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:06.031 [2024-10-21 10:01:42.412732] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:06.031 10:01:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.031 10:01:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:06.031 10:01:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:06.031 10:01:42 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:06.031 10:01:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:06.031 10:01:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:06.031 10:01:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:06.031 10:01:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:06.031 10:01:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:06.031 10:01:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:06.031 10:01:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:06.031 10:01:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.031 10:01:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.031 10:01:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.031 10:01:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.031 10:01:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.031 10:01:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:06.031 "name": "raid_bdev1", 00:17:06.031 "uuid": "5c35165d-1d09-4977-953e-8058f7148c1f", 00:17:06.031 "strip_size_kb": 64, 00:17:06.031 "state": "online", 00:17:06.031 "raid_level": "raid5f", 00:17:06.031 "superblock": false, 00:17:06.031 "num_base_bdevs": 4, 00:17:06.031 "num_base_bdevs_discovered": 3, 00:17:06.031 "num_base_bdevs_operational": 3, 00:17:06.031 "base_bdevs_list": [ 00:17:06.031 { 00:17:06.031 "name": null, 00:17:06.031 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:06.031 "is_configured": false, 00:17:06.031 "data_offset": 0, 
00:17:06.032 "data_size": 65536 00:17:06.032 }, 00:17:06.032 { 00:17:06.032 "name": "BaseBdev2", 00:17:06.032 "uuid": "77acfa71-6495-5831-a751-b48ff9fe6556", 00:17:06.032 "is_configured": true, 00:17:06.032 "data_offset": 0, 00:17:06.032 "data_size": 65536 00:17:06.032 }, 00:17:06.032 { 00:17:06.032 "name": "BaseBdev3", 00:17:06.032 "uuid": "7ca0e72c-63d5-58b2-b761-18c24c49b613", 00:17:06.032 "is_configured": true, 00:17:06.032 "data_offset": 0, 00:17:06.032 "data_size": 65536 00:17:06.032 }, 00:17:06.032 { 00:17:06.032 "name": "BaseBdev4", 00:17:06.032 "uuid": "fbc04c83-8c22-5a05-b9af-3d10b756d89f", 00:17:06.032 "is_configured": true, 00:17:06.032 "data_offset": 0, 00:17:06.032 "data_size": 65536 00:17:06.032 } 00:17:06.032 ] 00:17:06.032 }' 00:17:06.032 10:01:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:06.032 10:01:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.292 10:01:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:06.292 10:01:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:06.292 10:01:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:06.292 10:01:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:06.292 10:01:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:06.292 10:01:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.292 10:01:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.292 10:01:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.292 10:01:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.552 10:01:42 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.552 10:01:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:06.552 "name": "raid_bdev1", 00:17:06.552 "uuid": "5c35165d-1d09-4977-953e-8058f7148c1f", 00:17:06.552 "strip_size_kb": 64, 00:17:06.552 "state": "online", 00:17:06.552 "raid_level": "raid5f", 00:17:06.552 "superblock": false, 00:17:06.552 "num_base_bdevs": 4, 00:17:06.552 "num_base_bdevs_discovered": 3, 00:17:06.552 "num_base_bdevs_operational": 3, 00:17:06.552 "base_bdevs_list": [ 00:17:06.552 { 00:17:06.552 "name": null, 00:17:06.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:06.552 "is_configured": false, 00:17:06.552 "data_offset": 0, 00:17:06.552 "data_size": 65536 00:17:06.552 }, 00:17:06.552 { 00:17:06.552 "name": "BaseBdev2", 00:17:06.552 "uuid": "77acfa71-6495-5831-a751-b48ff9fe6556", 00:17:06.552 "is_configured": true, 00:17:06.552 "data_offset": 0, 00:17:06.552 "data_size": 65536 00:17:06.552 }, 00:17:06.552 { 00:17:06.552 "name": "BaseBdev3", 00:17:06.552 "uuid": "7ca0e72c-63d5-58b2-b761-18c24c49b613", 00:17:06.552 "is_configured": true, 00:17:06.552 "data_offset": 0, 00:17:06.552 "data_size": 65536 00:17:06.552 }, 00:17:06.552 { 00:17:06.552 "name": "BaseBdev4", 00:17:06.552 "uuid": "fbc04c83-8c22-5a05-b9af-3d10b756d89f", 00:17:06.552 "is_configured": true, 00:17:06.552 "data_offset": 0, 00:17:06.552 "data_size": 65536 00:17:06.552 } 00:17:06.552 ] 00:17:06.552 }' 00:17:06.552 10:01:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:06.552 10:01:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:06.552 10:01:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:06.552 10:01:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:06.552 10:01:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd 
bdev_raid_add_base_bdev raid_bdev1 spare 00:17:06.552 10:01:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.552 10:01:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.552 [2024-10-21 10:01:42.992939] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:06.552 [2024-10-21 10:01:43.009563] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b5b0 00:17:06.552 10:01:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.552 10:01:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:06.552 [2024-10-21 10:01:43.019533] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:07.492 10:01:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:07.492 10:01:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:07.492 10:01:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:07.492 10:01:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:07.492 10:01:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:07.492 10:01:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.492 10:01:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.492 10:01:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:07.492 10:01:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.492 10:01:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.492 10:01:44 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:07.492 "name": "raid_bdev1", 00:17:07.492 "uuid": "5c35165d-1d09-4977-953e-8058f7148c1f", 00:17:07.492 "strip_size_kb": 64, 00:17:07.492 "state": "online", 00:17:07.492 "raid_level": "raid5f", 00:17:07.492 "superblock": false, 00:17:07.492 "num_base_bdevs": 4, 00:17:07.492 "num_base_bdevs_discovered": 4, 00:17:07.492 "num_base_bdevs_operational": 4, 00:17:07.492 "process": { 00:17:07.492 "type": "rebuild", 00:17:07.492 "target": "spare", 00:17:07.492 "progress": { 00:17:07.492 "blocks": 19200, 00:17:07.492 "percent": 9 00:17:07.492 } 00:17:07.492 }, 00:17:07.492 "base_bdevs_list": [ 00:17:07.492 { 00:17:07.492 "name": "spare", 00:17:07.492 "uuid": "35dcd70a-a046-5a6d-bdf6-ceb599f9188f", 00:17:07.492 "is_configured": true, 00:17:07.492 "data_offset": 0, 00:17:07.492 "data_size": 65536 00:17:07.492 }, 00:17:07.492 { 00:17:07.492 "name": "BaseBdev2", 00:17:07.492 "uuid": "77acfa71-6495-5831-a751-b48ff9fe6556", 00:17:07.492 "is_configured": true, 00:17:07.492 "data_offset": 0, 00:17:07.492 "data_size": 65536 00:17:07.492 }, 00:17:07.492 { 00:17:07.492 "name": "BaseBdev3", 00:17:07.492 "uuid": "7ca0e72c-63d5-58b2-b761-18c24c49b613", 00:17:07.492 "is_configured": true, 00:17:07.492 "data_offset": 0, 00:17:07.492 "data_size": 65536 00:17:07.492 }, 00:17:07.492 { 00:17:07.492 "name": "BaseBdev4", 00:17:07.492 "uuid": "fbc04c83-8c22-5a05-b9af-3d10b756d89f", 00:17:07.492 "is_configured": true, 00:17:07.492 "data_offset": 0, 00:17:07.492 "data_size": 65536 00:17:07.492 } 00:17:07.492 ] 00:17:07.492 }' 00:17:07.492 10:01:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:07.753 10:01:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:07.753 10:01:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:07.753 10:01:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ 
spare == \s\p\a\r\e ]] 00:17:07.753 10:01:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:17:07.753 10:01:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:17:07.753 10:01:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:17:07.753 10:01:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=631 00:17:07.753 10:01:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:07.753 10:01:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:07.753 10:01:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:07.753 10:01:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:07.753 10:01:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:07.753 10:01:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:07.753 10:01:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.753 10:01:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:07.753 10:01:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.753 10:01:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.753 10:01:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.753 10:01:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:07.753 "name": "raid_bdev1", 00:17:07.753 "uuid": "5c35165d-1d09-4977-953e-8058f7148c1f", 00:17:07.753 "strip_size_kb": 64, 00:17:07.753 "state": "online", 00:17:07.753 "raid_level": "raid5f", 00:17:07.753 "superblock": false, 
00:17:07.753 "num_base_bdevs": 4, 00:17:07.753 "num_base_bdevs_discovered": 4, 00:17:07.753 "num_base_bdevs_operational": 4, 00:17:07.753 "process": { 00:17:07.753 "type": "rebuild", 00:17:07.753 "target": "spare", 00:17:07.753 "progress": { 00:17:07.753 "blocks": 21120, 00:17:07.753 "percent": 10 00:17:07.753 } 00:17:07.753 }, 00:17:07.753 "base_bdevs_list": [ 00:17:07.753 { 00:17:07.753 "name": "spare", 00:17:07.753 "uuid": "35dcd70a-a046-5a6d-bdf6-ceb599f9188f", 00:17:07.753 "is_configured": true, 00:17:07.753 "data_offset": 0, 00:17:07.753 "data_size": 65536 00:17:07.753 }, 00:17:07.753 { 00:17:07.753 "name": "BaseBdev2", 00:17:07.753 "uuid": "77acfa71-6495-5831-a751-b48ff9fe6556", 00:17:07.753 "is_configured": true, 00:17:07.753 "data_offset": 0, 00:17:07.753 "data_size": 65536 00:17:07.753 }, 00:17:07.753 { 00:17:07.753 "name": "BaseBdev3", 00:17:07.753 "uuid": "7ca0e72c-63d5-58b2-b761-18c24c49b613", 00:17:07.753 "is_configured": true, 00:17:07.753 "data_offset": 0, 00:17:07.753 "data_size": 65536 00:17:07.753 }, 00:17:07.753 { 00:17:07.753 "name": "BaseBdev4", 00:17:07.753 "uuid": "fbc04c83-8c22-5a05-b9af-3d10b756d89f", 00:17:07.753 "is_configured": true, 00:17:07.753 "data_offset": 0, 00:17:07.753 "data_size": 65536 00:17:07.753 } 00:17:07.753 ] 00:17:07.753 }' 00:17:07.753 10:01:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:07.753 10:01:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:07.753 10:01:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:07.753 10:01:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:07.753 10:01:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:08.692 10:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:08.692 10:01:45 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:08.692 10:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:08.692 10:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:08.692 10:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:08.692 10:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:08.952 10:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.952 10:01:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.952 10:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.952 10:01:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.952 10:01:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.952 10:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:08.952 "name": "raid_bdev1", 00:17:08.952 "uuid": "5c35165d-1d09-4977-953e-8058f7148c1f", 00:17:08.952 "strip_size_kb": 64, 00:17:08.952 "state": "online", 00:17:08.952 "raid_level": "raid5f", 00:17:08.952 "superblock": false, 00:17:08.952 "num_base_bdevs": 4, 00:17:08.952 "num_base_bdevs_discovered": 4, 00:17:08.952 "num_base_bdevs_operational": 4, 00:17:08.952 "process": { 00:17:08.952 "type": "rebuild", 00:17:08.952 "target": "spare", 00:17:08.952 "progress": { 00:17:08.952 "blocks": 42240, 00:17:08.952 "percent": 21 00:17:08.952 } 00:17:08.952 }, 00:17:08.952 "base_bdevs_list": [ 00:17:08.952 { 00:17:08.952 "name": "spare", 00:17:08.952 "uuid": "35dcd70a-a046-5a6d-bdf6-ceb599f9188f", 00:17:08.952 "is_configured": true, 00:17:08.952 "data_offset": 0, 00:17:08.952 "data_size": 65536 00:17:08.952 }, 00:17:08.952 { 00:17:08.952 
"name": "BaseBdev2", 00:17:08.952 "uuid": "77acfa71-6495-5831-a751-b48ff9fe6556", 00:17:08.952 "is_configured": true, 00:17:08.952 "data_offset": 0, 00:17:08.952 "data_size": 65536 00:17:08.952 }, 00:17:08.952 { 00:17:08.952 "name": "BaseBdev3", 00:17:08.952 "uuid": "7ca0e72c-63d5-58b2-b761-18c24c49b613", 00:17:08.952 "is_configured": true, 00:17:08.952 "data_offset": 0, 00:17:08.952 "data_size": 65536 00:17:08.952 }, 00:17:08.952 { 00:17:08.952 "name": "BaseBdev4", 00:17:08.952 "uuid": "fbc04c83-8c22-5a05-b9af-3d10b756d89f", 00:17:08.952 "is_configured": true, 00:17:08.952 "data_offset": 0, 00:17:08.952 "data_size": 65536 00:17:08.952 } 00:17:08.952 ] 00:17:08.952 }' 00:17:08.952 10:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:08.952 10:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:08.952 10:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:08.952 10:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:08.952 10:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:09.892 10:01:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:09.892 10:01:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:09.892 10:01:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:09.892 10:01:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:09.892 10:01:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:09.892 10:01:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:09.892 10:01:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 
00:17:09.892 10:01:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.892 10:01:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:09.892 10:01:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.892 10:01:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.892 10:01:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:09.892 "name": "raid_bdev1", 00:17:09.892 "uuid": "5c35165d-1d09-4977-953e-8058f7148c1f", 00:17:09.892 "strip_size_kb": 64, 00:17:09.892 "state": "online", 00:17:09.892 "raid_level": "raid5f", 00:17:09.892 "superblock": false, 00:17:09.892 "num_base_bdevs": 4, 00:17:09.892 "num_base_bdevs_discovered": 4, 00:17:09.892 "num_base_bdevs_operational": 4, 00:17:09.892 "process": { 00:17:09.892 "type": "rebuild", 00:17:09.892 "target": "spare", 00:17:09.892 "progress": { 00:17:09.892 "blocks": 63360, 00:17:09.892 "percent": 32 00:17:09.892 } 00:17:09.892 }, 00:17:09.892 "base_bdevs_list": [ 00:17:09.892 { 00:17:09.892 "name": "spare", 00:17:09.892 "uuid": "35dcd70a-a046-5a6d-bdf6-ceb599f9188f", 00:17:09.892 "is_configured": true, 00:17:09.892 "data_offset": 0, 00:17:09.892 "data_size": 65536 00:17:09.892 }, 00:17:09.892 { 00:17:09.892 "name": "BaseBdev2", 00:17:09.892 "uuid": "77acfa71-6495-5831-a751-b48ff9fe6556", 00:17:09.892 "is_configured": true, 00:17:09.892 "data_offset": 0, 00:17:09.892 "data_size": 65536 00:17:09.892 }, 00:17:09.892 { 00:17:09.892 "name": "BaseBdev3", 00:17:09.892 "uuid": "7ca0e72c-63d5-58b2-b761-18c24c49b613", 00:17:09.892 "is_configured": true, 00:17:09.892 "data_offset": 0, 00:17:09.892 "data_size": 65536 00:17:09.892 }, 00:17:09.892 { 00:17:09.892 "name": "BaseBdev4", 00:17:09.892 "uuid": "fbc04c83-8c22-5a05-b9af-3d10b756d89f", 00:17:09.892 "is_configured": true, 00:17:09.892 "data_offset": 0, 00:17:09.892 
"data_size": 65536 00:17:09.892 } 00:17:09.892 ] 00:17:09.892 }' 00:17:09.892 10:01:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:10.152 10:01:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:10.152 10:01:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:10.152 10:01:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:10.152 10:01:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:11.091 10:01:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:11.091 10:01:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:11.091 10:01:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:11.091 10:01:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:11.091 10:01:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:11.091 10:01:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:11.091 10:01:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.091 10:01:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.091 10:01:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.091 10:01:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:11.091 10:01:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.091 10:01:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:11.091 "name": "raid_bdev1", 00:17:11.091 "uuid": 
"5c35165d-1d09-4977-953e-8058f7148c1f", 00:17:11.091 "strip_size_kb": 64, 00:17:11.091 "state": "online", 00:17:11.091 "raid_level": "raid5f", 00:17:11.091 "superblock": false, 00:17:11.091 "num_base_bdevs": 4, 00:17:11.091 "num_base_bdevs_discovered": 4, 00:17:11.091 "num_base_bdevs_operational": 4, 00:17:11.091 "process": { 00:17:11.091 "type": "rebuild", 00:17:11.091 "target": "spare", 00:17:11.091 "progress": { 00:17:11.091 "blocks": 86400, 00:17:11.091 "percent": 43 00:17:11.091 } 00:17:11.091 }, 00:17:11.091 "base_bdevs_list": [ 00:17:11.091 { 00:17:11.091 "name": "spare", 00:17:11.091 "uuid": "35dcd70a-a046-5a6d-bdf6-ceb599f9188f", 00:17:11.091 "is_configured": true, 00:17:11.091 "data_offset": 0, 00:17:11.091 "data_size": 65536 00:17:11.091 }, 00:17:11.091 { 00:17:11.091 "name": "BaseBdev2", 00:17:11.091 "uuid": "77acfa71-6495-5831-a751-b48ff9fe6556", 00:17:11.091 "is_configured": true, 00:17:11.091 "data_offset": 0, 00:17:11.091 "data_size": 65536 00:17:11.091 }, 00:17:11.091 { 00:17:11.091 "name": "BaseBdev3", 00:17:11.091 "uuid": "7ca0e72c-63d5-58b2-b761-18c24c49b613", 00:17:11.091 "is_configured": true, 00:17:11.091 "data_offset": 0, 00:17:11.091 "data_size": 65536 00:17:11.091 }, 00:17:11.091 { 00:17:11.091 "name": "BaseBdev4", 00:17:11.091 "uuid": "fbc04c83-8c22-5a05-b9af-3d10b756d89f", 00:17:11.091 "is_configured": true, 00:17:11.091 "data_offset": 0, 00:17:11.091 "data_size": 65536 00:17:11.091 } 00:17:11.091 ] 00:17:11.091 }' 00:17:11.091 10:01:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:11.091 10:01:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:11.091 10:01:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:11.351 10:01:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:11.351 10:01:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- 
# sleep 1 00:17:12.297 10:01:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:12.297 10:01:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:12.297 10:01:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:12.297 10:01:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:12.297 10:01:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:12.297 10:01:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:12.297 10:01:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.297 10:01:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.297 10:01:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.297 10:01:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:12.297 10:01:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.297 10:01:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:12.297 "name": "raid_bdev1", 00:17:12.297 "uuid": "5c35165d-1d09-4977-953e-8058f7148c1f", 00:17:12.297 "strip_size_kb": 64, 00:17:12.297 "state": "online", 00:17:12.297 "raid_level": "raid5f", 00:17:12.297 "superblock": false, 00:17:12.297 "num_base_bdevs": 4, 00:17:12.297 "num_base_bdevs_discovered": 4, 00:17:12.297 "num_base_bdevs_operational": 4, 00:17:12.297 "process": { 00:17:12.297 "type": "rebuild", 00:17:12.297 "target": "spare", 00:17:12.297 "progress": { 00:17:12.297 "blocks": 107520, 00:17:12.297 "percent": 54 00:17:12.297 } 00:17:12.297 }, 00:17:12.297 "base_bdevs_list": [ 00:17:12.297 { 00:17:12.297 "name": "spare", 00:17:12.297 "uuid": 
"35dcd70a-a046-5a6d-bdf6-ceb599f9188f", 00:17:12.297 "is_configured": true, 00:17:12.297 "data_offset": 0, 00:17:12.297 "data_size": 65536 00:17:12.297 }, 00:17:12.297 { 00:17:12.297 "name": "BaseBdev2", 00:17:12.297 "uuid": "77acfa71-6495-5831-a751-b48ff9fe6556", 00:17:12.297 "is_configured": true, 00:17:12.297 "data_offset": 0, 00:17:12.297 "data_size": 65536 00:17:12.297 }, 00:17:12.297 { 00:17:12.297 "name": "BaseBdev3", 00:17:12.297 "uuid": "7ca0e72c-63d5-58b2-b761-18c24c49b613", 00:17:12.297 "is_configured": true, 00:17:12.297 "data_offset": 0, 00:17:12.297 "data_size": 65536 00:17:12.297 }, 00:17:12.297 { 00:17:12.297 "name": "BaseBdev4", 00:17:12.297 "uuid": "fbc04c83-8c22-5a05-b9af-3d10b756d89f", 00:17:12.297 "is_configured": true, 00:17:12.297 "data_offset": 0, 00:17:12.297 "data_size": 65536 00:17:12.297 } 00:17:12.297 ] 00:17:12.297 }' 00:17:12.297 10:01:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:12.297 10:01:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:12.297 10:01:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:12.297 10:01:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:12.297 10:01:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:13.701 10:01:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:13.701 10:01:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:13.701 10:01:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:13.701 10:01:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:13.701 10:01:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:13.701 10:01:49 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:13.701 10:01:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.701 10:01:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.701 10:01:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.701 10:01:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:13.701 10:01:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.701 10:01:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:13.701 "name": "raid_bdev1", 00:17:13.701 "uuid": "5c35165d-1d09-4977-953e-8058f7148c1f", 00:17:13.701 "strip_size_kb": 64, 00:17:13.701 "state": "online", 00:17:13.701 "raid_level": "raid5f", 00:17:13.701 "superblock": false, 00:17:13.701 "num_base_bdevs": 4, 00:17:13.701 "num_base_bdevs_discovered": 4, 00:17:13.701 "num_base_bdevs_operational": 4, 00:17:13.701 "process": { 00:17:13.701 "type": "rebuild", 00:17:13.701 "target": "spare", 00:17:13.701 "progress": { 00:17:13.701 "blocks": 130560, 00:17:13.701 "percent": 66 00:17:13.701 } 00:17:13.701 }, 00:17:13.701 "base_bdevs_list": [ 00:17:13.701 { 00:17:13.701 "name": "spare", 00:17:13.701 "uuid": "35dcd70a-a046-5a6d-bdf6-ceb599f9188f", 00:17:13.701 "is_configured": true, 00:17:13.701 "data_offset": 0, 00:17:13.701 "data_size": 65536 00:17:13.701 }, 00:17:13.701 { 00:17:13.701 "name": "BaseBdev2", 00:17:13.701 "uuid": "77acfa71-6495-5831-a751-b48ff9fe6556", 00:17:13.701 "is_configured": true, 00:17:13.701 "data_offset": 0, 00:17:13.701 "data_size": 65536 00:17:13.701 }, 00:17:13.701 { 00:17:13.701 "name": "BaseBdev3", 00:17:13.701 "uuid": "7ca0e72c-63d5-58b2-b761-18c24c49b613", 00:17:13.701 "is_configured": true, 00:17:13.701 "data_offset": 0, 00:17:13.701 "data_size": 65536 00:17:13.701 }, 
00:17:13.701 { 00:17:13.701 "name": "BaseBdev4", 00:17:13.701 "uuid": "fbc04c83-8c22-5a05-b9af-3d10b756d89f", 00:17:13.701 "is_configured": true, 00:17:13.701 "data_offset": 0, 00:17:13.701 "data_size": 65536 00:17:13.701 } 00:17:13.701 ] 00:17:13.701 }' 00:17:13.701 10:01:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:13.701 10:01:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:13.701 10:01:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:13.701 10:01:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:13.701 10:01:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:14.638 10:01:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:14.638 10:01:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:14.638 10:01:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:14.638 10:01:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:14.638 10:01:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:14.638 10:01:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:14.638 10:01:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.638 10:01:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.638 10:01:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:14.638 10:01:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.638 10:01:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:17:14.638 10:01:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:14.638 "name": "raid_bdev1", 00:17:14.638 "uuid": "5c35165d-1d09-4977-953e-8058f7148c1f", 00:17:14.638 "strip_size_kb": 64, 00:17:14.638 "state": "online", 00:17:14.638 "raid_level": "raid5f", 00:17:14.638 "superblock": false, 00:17:14.638 "num_base_bdevs": 4, 00:17:14.638 "num_base_bdevs_discovered": 4, 00:17:14.638 "num_base_bdevs_operational": 4, 00:17:14.638 "process": { 00:17:14.638 "type": "rebuild", 00:17:14.638 "target": "spare", 00:17:14.638 "progress": { 00:17:14.638 "blocks": 151680, 00:17:14.638 "percent": 77 00:17:14.638 } 00:17:14.638 }, 00:17:14.638 "base_bdevs_list": [ 00:17:14.638 { 00:17:14.638 "name": "spare", 00:17:14.638 "uuid": "35dcd70a-a046-5a6d-bdf6-ceb599f9188f", 00:17:14.638 "is_configured": true, 00:17:14.638 "data_offset": 0, 00:17:14.638 "data_size": 65536 00:17:14.638 }, 00:17:14.638 { 00:17:14.638 "name": "BaseBdev2", 00:17:14.638 "uuid": "77acfa71-6495-5831-a751-b48ff9fe6556", 00:17:14.638 "is_configured": true, 00:17:14.638 "data_offset": 0, 00:17:14.638 "data_size": 65536 00:17:14.638 }, 00:17:14.638 { 00:17:14.638 "name": "BaseBdev3", 00:17:14.638 "uuid": "7ca0e72c-63d5-58b2-b761-18c24c49b613", 00:17:14.638 "is_configured": true, 00:17:14.638 "data_offset": 0, 00:17:14.638 "data_size": 65536 00:17:14.638 }, 00:17:14.638 { 00:17:14.638 "name": "BaseBdev4", 00:17:14.638 "uuid": "fbc04c83-8c22-5a05-b9af-3d10b756d89f", 00:17:14.638 "is_configured": true, 00:17:14.638 "data_offset": 0, 00:17:14.638 "data_size": 65536 00:17:14.638 } 00:17:14.638 ] 00:17:14.638 }' 00:17:14.638 10:01:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:14.638 10:01:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:14.638 10:01:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:14.638 10:01:51 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:14.638 10:01:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:15.575 10:01:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:15.575 10:01:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:15.575 10:01:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:15.575 10:01:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:15.575 10:01:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:15.575 10:01:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:15.575 10:01:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.575 10:01:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.575 10:01:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:15.575 10:01:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.833 10:01:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.833 10:01:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:15.833 "name": "raid_bdev1", 00:17:15.833 "uuid": "5c35165d-1d09-4977-953e-8058f7148c1f", 00:17:15.833 "strip_size_kb": 64, 00:17:15.833 "state": "online", 00:17:15.833 "raid_level": "raid5f", 00:17:15.833 "superblock": false, 00:17:15.833 "num_base_bdevs": 4, 00:17:15.833 "num_base_bdevs_discovered": 4, 00:17:15.833 "num_base_bdevs_operational": 4, 00:17:15.833 "process": { 00:17:15.833 "type": "rebuild", 00:17:15.833 "target": "spare", 00:17:15.833 "progress": { 00:17:15.833 "blocks": 172800, 
00:17:15.833 "percent": 87 00:17:15.833 } 00:17:15.833 }, 00:17:15.833 "base_bdevs_list": [ 00:17:15.833 { 00:17:15.833 "name": "spare", 00:17:15.833 "uuid": "35dcd70a-a046-5a6d-bdf6-ceb599f9188f", 00:17:15.833 "is_configured": true, 00:17:15.834 "data_offset": 0, 00:17:15.834 "data_size": 65536 00:17:15.834 }, 00:17:15.834 { 00:17:15.834 "name": "BaseBdev2", 00:17:15.834 "uuid": "77acfa71-6495-5831-a751-b48ff9fe6556", 00:17:15.834 "is_configured": true, 00:17:15.834 "data_offset": 0, 00:17:15.834 "data_size": 65536 00:17:15.834 }, 00:17:15.834 { 00:17:15.834 "name": "BaseBdev3", 00:17:15.834 "uuid": "7ca0e72c-63d5-58b2-b761-18c24c49b613", 00:17:15.834 "is_configured": true, 00:17:15.834 "data_offset": 0, 00:17:15.834 "data_size": 65536 00:17:15.834 }, 00:17:15.834 { 00:17:15.834 "name": "BaseBdev4", 00:17:15.834 "uuid": "fbc04c83-8c22-5a05-b9af-3d10b756d89f", 00:17:15.834 "is_configured": true, 00:17:15.834 "data_offset": 0, 00:17:15.834 "data_size": 65536 00:17:15.834 } 00:17:15.834 ] 00:17:15.834 }' 00:17:15.834 10:01:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:15.834 10:01:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:15.834 10:01:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:15.834 10:01:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:15.834 10:01:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:16.771 10:01:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:16.771 10:01:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:16.771 10:01:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:16.771 10:01:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:17:16.771 10:01:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:16.771 10:01:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:16.771 10:01:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.771 10:01:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.771 10:01:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.771 10:01:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:16.771 10:01:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.771 10:01:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:16.771 "name": "raid_bdev1", 00:17:16.771 "uuid": "5c35165d-1d09-4977-953e-8058f7148c1f", 00:17:16.771 "strip_size_kb": 64, 00:17:16.771 "state": "online", 00:17:16.771 "raid_level": "raid5f", 00:17:16.771 "superblock": false, 00:17:16.771 "num_base_bdevs": 4, 00:17:16.771 "num_base_bdevs_discovered": 4, 00:17:16.771 "num_base_bdevs_operational": 4, 00:17:16.771 "process": { 00:17:16.771 "type": "rebuild", 00:17:16.771 "target": "spare", 00:17:16.771 "progress": { 00:17:16.771 "blocks": 195840, 00:17:16.771 "percent": 99 00:17:16.771 } 00:17:16.771 }, 00:17:16.771 "base_bdevs_list": [ 00:17:16.771 { 00:17:16.771 "name": "spare", 00:17:16.771 "uuid": "35dcd70a-a046-5a6d-bdf6-ceb599f9188f", 00:17:16.771 "is_configured": true, 00:17:16.771 "data_offset": 0, 00:17:16.771 "data_size": 65536 00:17:16.771 }, 00:17:16.771 { 00:17:16.771 "name": "BaseBdev2", 00:17:16.771 "uuid": "77acfa71-6495-5831-a751-b48ff9fe6556", 00:17:16.771 "is_configured": true, 00:17:16.771 "data_offset": 0, 00:17:16.771 "data_size": 65536 00:17:16.771 }, 00:17:16.771 { 00:17:16.771 "name": "BaseBdev3", 00:17:16.771 "uuid": 
"7ca0e72c-63d5-58b2-b761-18c24c49b613", 00:17:16.771 "is_configured": true, 00:17:16.771 "data_offset": 0, 00:17:16.771 "data_size": 65536 00:17:16.771 }, 00:17:16.771 { 00:17:16.771 "name": "BaseBdev4", 00:17:16.771 "uuid": "fbc04c83-8c22-5a05-b9af-3d10b756d89f", 00:17:16.771 "is_configured": true, 00:17:16.771 "data_offset": 0, 00:17:16.771 "data_size": 65536 00:17:16.771 } 00:17:16.771 ] 00:17:16.771 }' 00:17:16.771 10:01:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:17.031 10:01:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:17.031 10:01:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:17.031 [2024-10-21 10:01:53.383775] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:17.031 [2024-10-21 10:01:53.383917] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:17.031 [2024-10-21 10:01:53.384007] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:17.031 10:01:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:17.031 10:01:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:17.967 10:01:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:17.967 10:01:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:17.967 10:01:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:17.967 10:01:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:17.967 10:01:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:17.967 10:01:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:17:17.967 10:01:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.967 10:01:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.967 10:01:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.967 10:01:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.967 10:01:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.967 10:01:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:17.967 "name": "raid_bdev1", 00:17:17.967 "uuid": "5c35165d-1d09-4977-953e-8058f7148c1f", 00:17:17.967 "strip_size_kb": 64, 00:17:17.967 "state": "online", 00:17:17.967 "raid_level": "raid5f", 00:17:17.967 "superblock": false, 00:17:17.967 "num_base_bdevs": 4, 00:17:17.967 "num_base_bdevs_discovered": 4, 00:17:17.967 "num_base_bdevs_operational": 4, 00:17:17.967 "base_bdevs_list": [ 00:17:17.967 { 00:17:17.967 "name": "spare", 00:17:17.967 "uuid": "35dcd70a-a046-5a6d-bdf6-ceb599f9188f", 00:17:17.967 "is_configured": true, 00:17:17.967 "data_offset": 0, 00:17:17.967 "data_size": 65536 00:17:17.967 }, 00:17:17.967 { 00:17:17.967 "name": "BaseBdev2", 00:17:17.967 "uuid": "77acfa71-6495-5831-a751-b48ff9fe6556", 00:17:17.967 "is_configured": true, 00:17:17.967 "data_offset": 0, 00:17:17.967 "data_size": 65536 00:17:17.967 }, 00:17:17.967 { 00:17:17.967 "name": "BaseBdev3", 00:17:17.967 "uuid": "7ca0e72c-63d5-58b2-b761-18c24c49b613", 00:17:17.967 "is_configured": true, 00:17:17.967 "data_offset": 0, 00:17:17.967 "data_size": 65536 00:17:17.967 }, 00:17:17.967 { 00:17:17.967 "name": "BaseBdev4", 00:17:17.967 "uuid": "fbc04c83-8c22-5a05-b9af-3d10b756d89f", 00:17:17.967 "is_configured": true, 00:17:17.967 "data_offset": 0, 00:17:17.967 "data_size": 65536 00:17:17.967 } 00:17:17.967 ] 00:17:17.967 }' 00:17:17.967 10:01:54 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:17.967 10:01:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:17.967 10:01:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:18.227 10:01:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:18.227 10:01:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:17:18.227 10:01:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:18.227 10:01:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:18.227 10:01:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:18.227 10:01:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:18.227 10:01:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:18.227 10:01:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.227 10:01:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.227 10:01:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.227 10:01:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:18.227 10:01:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.227 10:01:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:18.227 "name": "raid_bdev1", 00:17:18.227 "uuid": "5c35165d-1d09-4977-953e-8058f7148c1f", 00:17:18.227 "strip_size_kb": 64, 00:17:18.227 "state": "online", 00:17:18.227 "raid_level": "raid5f", 00:17:18.227 "superblock": false, 00:17:18.227 "num_base_bdevs": 4, 00:17:18.227 
"num_base_bdevs_discovered": 4, 00:17:18.227 "num_base_bdevs_operational": 4, 00:17:18.227 "base_bdevs_list": [ 00:17:18.227 { 00:17:18.227 "name": "spare", 00:17:18.227 "uuid": "35dcd70a-a046-5a6d-bdf6-ceb599f9188f", 00:17:18.227 "is_configured": true, 00:17:18.227 "data_offset": 0, 00:17:18.227 "data_size": 65536 00:17:18.227 }, 00:17:18.227 { 00:17:18.227 "name": "BaseBdev2", 00:17:18.227 "uuid": "77acfa71-6495-5831-a751-b48ff9fe6556", 00:17:18.227 "is_configured": true, 00:17:18.227 "data_offset": 0, 00:17:18.227 "data_size": 65536 00:17:18.227 }, 00:17:18.227 { 00:17:18.227 "name": "BaseBdev3", 00:17:18.227 "uuid": "7ca0e72c-63d5-58b2-b761-18c24c49b613", 00:17:18.227 "is_configured": true, 00:17:18.227 "data_offset": 0, 00:17:18.227 "data_size": 65536 00:17:18.227 }, 00:17:18.227 { 00:17:18.227 "name": "BaseBdev4", 00:17:18.227 "uuid": "fbc04c83-8c22-5a05-b9af-3d10b756d89f", 00:17:18.227 "is_configured": true, 00:17:18.227 "data_offset": 0, 00:17:18.227 "data_size": 65536 00:17:18.227 } 00:17:18.227 ] 00:17:18.227 }' 00:17:18.227 10:01:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:18.227 10:01:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:18.227 10:01:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:18.227 10:01:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:18.227 10:01:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:18.227 10:01:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:18.227 10:01:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:18.227 10:01:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:18.227 10:01:54 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:18.227 10:01:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:18.227 10:01:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:18.227 10:01:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:18.227 10:01:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:18.227 10:01:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:18.227 10:01:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:18.227 10:01:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.227 10:01:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.227 10:01:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.227 10:01:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.227 10:01:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:18.227 "name": "raid_bdev1", 00:17:18.227 "uuid": "5c35165d-1d09-4977-953e-8058f7148c1f", 00:17:18.227 "strip_size_kb": 64, 00:17:18.227 "state": "online", 00:17:18.227 "raid_level": "raid5f", 00:17:18.227 "superblock": false, 00:17:18.227 "num_base_bdevs": 4, 00:17:18.227 "num_base_bdevs_discovered": 4, 00:17:18.227 "num_base_bdevs_operational": 4, 00:17:18.227 "base_bdevs_list": [ 00:17:18.227 { 00:17:18.227 "name": "spare", 00:17:18.227 "uuid": "35dcd70a-a046-5a6d-bdf6-ceb599f9188f", 00:17:18.227 "is_configured": true, 00:17:18.227 "data_offset": 0, 00:17:18.227 "data_size": 65536 00:17:18.227 }, 00:17:18.227 { 00:17:18.227 "name": "BaseBdev2", 00:17:18.227 "uuid": "77acfa71-6495-5831-a751-b48ff9fe6556", 00:17:18.227 "is_configured": true, 00:17:18.227 
"data_offset": 0, 00:17:18.227 "data_size": 65536 00:17:18.227 }, 00:17:18.227 { 00:17:18.227 "name": "BaseBdev3", 00:17:18.227 "uuid": "7ca0e72c-63d5-58b2-b761-18c24c49b613", 00:17:18.227 "is_configured": true, 00:17:18.227 "data_offset": 0, 00:17:18.227 "data_size": 65536 00:17:18.227 }, 00:17:18.227 { 00:17:18.227 "name": "BaseBdev4", 00:17:18.227 "uuid": "fbc04c83-8c22-5a05-b9af-3d10b756d89f", 00:17:18.227 "is_configured": true, 00:17:18.227 "data_offset": 0, 00:17:18.227 "data_size": 65536 00:17:18.227 } 00:17:18.227 ] 00:17:18.227 }' 00:17:18.227 10:01:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:18.227 10:01:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.795 10:01:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:18.795 10:01:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.795 10:01:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.795 [2024-10-21 10:01:55.126201] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:18.795 [2024-10-21 10:01:55.126359] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:18.795 [2024-10-21 10:01:55.126518] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:18.796 [2024-10-21 10:01:55.126677] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:18.796 [2024-10-21 10:01:55.126736] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000005b80 name raid_bdev1, state offline 00:17:18.796 10:01:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.796 10:01:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.796 10:01:55 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:17:18.796 10:01:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.796 10:01:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.796 10:01:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.796 10:01:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:18.796 10:01:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:18.796 10:01:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:18.796 10:01:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:18.796 10:01:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:18.796 10:01:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:18.796 10:01:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:18.796 10:01:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:18.796 10:01:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:18.796 10:01:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:17:18.796 10:01:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:18.796 10:01:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:18.796 10:01:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:19.055 /dev/nbd0 00:17:19.055 10:01:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:19.055 10:01:55 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:19.055 10:01:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:17:19.055 10:01:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:17:19.055 10:01:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:19.055 10:01:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:19.055 10:01:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:17:19.055 10:01:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:17:19.055 10:01:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:19.055 10:01:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:19.055 10:01:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:19.055 1+0 records in 00:17:19.055 1+0 records out 00:17:19.055 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000522161 s, 7.8 MB/s 00:17:19.055 10:01:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:19.055 10:01:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:17:19.055 10:01:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:19.055 10:01:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:19.055 10:01:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:17:19.055 10:01:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:19.055 10:01:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 
-- # (( i < 2 )) 00:17:19.055 10:01:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:19.315 /dev/nbd1 00:17:19.315 10:01:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:19.315 10:01:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:19.315 10:01:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:17:19.315 10:01:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:17:19.315 10:01:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:19.315 10:01:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:19.315 10:01:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:17:19.315 10:01:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:17:19.315 10:01:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:19.315 10:01:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:19.315 10:01:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:19.315 1+0 records in 00:17:19.315 1+0 records out 00:17:19.315 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000376467 s, 10.9 MB/s 00:17:19.315 10:01:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:19.315 10:01:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:17:19.315 10:01:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:19.315 10:01:55 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:19.315 10:01:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:17:19.315 10:01:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:19.315 10:01:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:19.315 10:01:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:17:19.573 10:01:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:19.573 10:01:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:19.573 10:01:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:19.573 10:01:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:19.573 10:01:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:17:19.573 10:01:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:19.573 10:01:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:19.832 10:01:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:19.832 10:01:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:19.832 10:01:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:19.832 10:01:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:19.832 10:01:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:19.832 10:01:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:19.832 10:01:56 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@41 -- # break 00:17:19.832 10:01:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:19.832 10:01:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:19.832 10:01:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:19.832 10:01:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:19.832 10:01:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:19.832 10:01:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:19.832 10:01:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:19.832 10:01:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:19.832 10:01:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:20.095 10:01:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:20.095 10:01:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:20.095 10:01:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:17:20.095 10:01:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 84253 00:17:20.095 10:01:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 84253 ']' 00:17:20.095 10:01:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 84253 00:17:20.095 10:01:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:17:20.095 10:01:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:20.095 10:01:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84253 00:17:20.095 10:01:56 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:20.095 killing process with pid 84253 00:17:20.095 Received shutdown signal, test time was about 60.000000 seconds 00:17:20.095 00:17:20.095 Latency(us) 00:17:20.095 [2024-10-21T10:01:56.690Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:20.095 [2024-10-21T10:01:56.690Z] =================================================================================================================== 00:17:20.095 [2024-10-21T10:01:56.690Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:20.095 10:01:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:20.095 10:01:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84253' 00:17:20.095 10:01:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@969 -- # kill 84253 00:17:20.095 [2024-10-21 10:01:56.478308] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:20.095 10:01:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@974 -- # wait 84253 00:17:20.669 [2024-10-21 10:01:57.068402] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:22.049 10:01:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:17:22.049 00:17:22.049 real 0m20.646s 00:17:22.049 user 0m24.390s 00:17:22.049 sys 0m2.419s 00:17:22.049 10:01:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:22.049 ************************************ 00:17:22.049 END TEST raid5f_rebuild_test 00:17:22.049 ************************************ 00:17:22.049 10:01:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.049 10:01:58 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:17:22.049 10:01:58 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 
00:17:22.049 10:01:58 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:22.049 10:01:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:22.049 ************************************ 00:17:22.049 START TEST raid5f_rebuild_test_sb 00:17:22.049 ************************************ 00:17:22.049 10:01:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 4 true false true 00:17:22.049 10:01:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:17:22.049 10:01:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:17:22.049 10:01:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:22.049 10:01:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:22.049 10:01:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:22.049 10:01:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:22.049 10:01:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:22.050 10:01:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:22.050 10:01:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:22.050 10:01:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:22.050 10:01:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:22.050 10:01:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:22.050 10:01:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:22.050 10:01:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:17:22.050 10:01:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 
00:17:22.050 10:01:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:22.050 10:01:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:17:22.050 10:01:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:22.050 10:01:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:22.050 10:01:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:22.050 10:01:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:22.050 10:01:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:22.050 10:01:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:22.050 10:01:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:22.050 10:01:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:22.050 10:01:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:22.050 10:01:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:17:22.050 10:01:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:17:22.050 10:01:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:17:22.050 10:01:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:17:22.050 10:01:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:22.050 10:01:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:22.050 10:01:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=84781 00:17:22.050 10:01:58 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:22.050 10:01:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 84781 00:17:22.050 10:01:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 84781 ']' 00:17:22.050 10:01:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:22.050 10:01:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:22.050 10:01:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:22.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:22.050 10:01:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:22.050 10:01:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:22.310 [2024-10-21 10:01:58.714002] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:17:22.310 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:22.310 Zero copy mechanism will not be used. 
00:17:22.310 [2024-10-21 10:01:58.714284] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84781 ] 00:17:22.310 [2024-10-21 10:01:58.889044] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:22.569 [2024-10-21 10:01:59.059099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:22.828 [2024-10-21 10:01:59.363920] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:22.828 [2024-10-21 10:01:59.364013] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:23.086 10:01:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:23.086 10:01:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:17:23.086 10:01:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:23.086 10:01:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:23.086 10:01:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.086 10:01:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.345 BaseBdev1_malloc 00:17:23.345 10:01:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.345 10:01:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:23.345 10:01:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.345 10:01:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.345 [2024-10-21 10:01:59.692790] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:23.345 [2024-10-21 10:01:59.692951] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:23.345 [2024-10-21 10:01:59.692997] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:17:23.345 [2024-10-21 10:01:59.693016] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:23.345 [2024-10-21 10:01:59.695943] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:23.345 [2024-10-21 10:01:59.696040] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:23.345 BaseBdev1 00:17:23.345 10:01:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.345 10:01:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:23.345 10:01:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:23.345 10:01:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.345 10:01:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.345 BaseBdev2_malloc 00:17:23.345 10:01:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.345 10:01:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:23.345 10:01:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.345 10:01:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.345 [2024-10-21 10:01:59.765859] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:23.345 [2024-10-21 10:01:59.765930] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:17:23.345 [2024-10-21 10:01:59.765952] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:17:23.345 [2024-10-21 10:01:59.765966] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:23.345 [2024-10-21 10:01:59.768741] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:23.345 [2024-10-21 10:01:59.768785] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:23.345 BaseBdev2 00:17:23.345 10:01:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.345 10:01:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:23.345 10:01:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:23.345 10:01:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.345 10:01:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.345 BaseBdev3_malloc 00:17:23.345 10:01:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.345 10:01:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:17:23.345 10:01:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.345 10:01:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.345 [2024-10-21 10:01:59.853933] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:17:23.345 [2024-10-21 10:01:59.854006] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:23.345 [2024-10-21 10:01:59.854034] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:17:23.345 [2024-10-21 
10:01:59.854048] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:23.345 [2024-10-21 10:01:59.856848] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:23.345 [2024-10-21 10:01:59.856903] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:23.345 BaseBdev3 00:17:23.345 10:01:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.345 10:01:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:23.345 10:01:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:17:23.345 10:01:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.345 10:01:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.345 BaseBdev4_malloc 00:17:23.345 10:01:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.345 10:01:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:17:23.345 10:01:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.345 10:01:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.345 [2024-10-21 10:01:59.927741] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:17:23.345 [2024-10-21 10:01:59.927861] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:23.345 [2024-10-21 10:01:59.927890] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:23.345 [2024-10-21 10:01:59.927905] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:23.345 [2024-10-21 10:01:59.930687] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:17:23.345 [2024-10-21 10:01:59.930734] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:17:23.345 BaseBdev4 00:17:23.345 10:01:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.345 10:01:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:23.345 10:01:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.345 10:01:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.605 spare_malloc 00:17:23.605 10:01:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.605 10:01:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:23.605 10:01:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.605 10:01:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.605 spare_delay 00:17:23.605 10:02:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.605 10:02:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:23.605 10:02:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.605 10:02:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.605 [2024-10-21 10:02:00.012434] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:23.605 [2024-10-21 10:02:00.012577] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:23.605 [2024-10-21 10:02:00.012607] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 
00:17:23.605 [2024-10-21 10:02:00.012622] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:23.605 [2024-10-21 10:02:00.015423] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:23.605 [2024-10-21 10:02:00.015473] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:23.605 spare 00:17:23.605 10:02:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.605 10:02:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:17:23.605 10:02:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.605 10:02:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.605 [2024-10-21 10:02:00.024491] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:23.605 [2024-10-21 10:02:00.026937] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:23.605 [2024-10-21 10:02:00.027103] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:23.605 [2024-10-21 10:02:00.027171] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:23.605 [2024-10-21 10:02:00.027410] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000005b80 00:17:23.605 [2024-10-21 10:02:00.027429] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:23.605 [2024-10-21 10:02:00.027767] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:23.605 [2024-10-21 10:02:00.037794] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000005b80 00:17:23.605 [2024-10-21 10:02:00.037869] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is 
created with name raid_bdev1, raid_bdev 0x617000005b80 00:17:23.605 [2024-10-21 10:02:00.038139] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:23.605 10:02:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.605 10:02:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:23.605 10:02:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:23.605 10:02:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:23.605 10:02:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:23.605 10:02:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:23.605 10:02:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:23.605 10:02:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:23.605 10:02:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:23.605 10:02:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:23.605 10:02:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:23.605 10:02:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.605 10:02:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:23.605 10:02:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.605 10:02:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.605 10:02:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.605 10:02:00 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:23.605 "name": "raid_bdev1", 00:17:23.605 "uuid": "f2a50a58-dd1d-48c4-9223-821e4d32aa46", 00:17:23.605 "strip_size_kb": 64, 00:17:23.605 "state": "online", 00:17:23.605 "raid_level": "raid5f", 00:17:23.605 "superblock": true, 00:17:23.605 "num_base_bdevs": 4, 00:17:23.605 "num_base_bdevs_discovered": 4, 00:17:23.605 "num_base_bdevs_operational": 4, 00:17:23.605 "base_bdevs_list": [ 00:17:23.605 { 00:17:23.605 "name": "BaseBdev1", 00:17:23.605 "uuid": "51dad384-ed6c-51b4-b922-bb136c0d0430", 00:17:23.605 "is_configured": true, 00:17:23.605 "data_offset": 2048, 00:17:23.605 "data_size": 63488 00:17:23.605 }, 00:17:23.605 { 00:17:23.605 "name": "BaseBdev2", 00:17:23.605 "uuid": "29ac600c-f126-5667-ad4f-72a40bd143a7", 00:17:23.605 "is_configured": true, 00:17:23.605 "data_offset": 2048, 00:17:23.605 "data_size": 63488 00:17:23.605 }, 00:17:23.605 { 00:17:23.605 "name": "BaseBdev3", 00:17:23.605 "uuid": "1d06a79a-bd1e-52f0-a396-f61c0c4adc06", 00:17:23.605 "is_configured": true, 00:17:23.606 "data_offset": 2048, 00:17:23.606 "data_size": 63488 00:17:23.606 }, 00:17:23.606 { 00:17:23.606 "name": "BaseBdev4", 00:17:23.606 "uuid": "6ffb1d0b-c554-5f71-b6a9-78a5b435db7e", 00:17:23.606 "is_configured": true, 00:17:23.606 "data_offset": 2048, 00:17:23.606 "data_size": 63488 00:17:23.606 } 00:17:23.606 ] 00:17:23.606 }' 00:17:23.606 10:02:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:23.606 10:02:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.173 10:02:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:24.173 10:02:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.173 10:02:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:24.173 10:02:00 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.173 [2024-10-21 10:02:00.500584] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:24.173 10:02:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.173 10:02:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:17:24.173 10:02:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.173 10:02:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:24.173 10:02:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.173 10:02:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.173 10:02:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.173 10:02:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:17:24.173 10:02:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:24.173 10:02:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:24.173 10:02:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:24.173 10:02:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:24.173 10:02:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:24.173 10:02:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:24.173 10:02:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:24.173 10:02:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:24.173 10:02:00 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:24.173 10:02:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:17:24.173 10:02:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:24.173 10:02:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:24.173 10:02:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:24.433 [2024-10-21 10:02:00.831847] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:24.433 /dev/nbd0 00:17:24.433 10:02:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:24.433 10:02:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:24.433 10:02:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:17:24.433 10:02:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:17:24.433 10:02:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:24.433 10:02:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:24.433 10:02:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:17:24.433 10:02:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:17:24.433 10:02:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:24.433 10:02:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:24.433 10:02:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:24.433 1+0 records in 00:17:24.433 
1+0 records out 00:17:24.433 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00032693 s, 12.5 MB/s 00:17:24.433 10:02:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:24.433 10:02:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:17:24.433 10:02:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:24.433 10:02:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:24.433 10:02:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:17:24.433 10:02:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:24.433 10:02:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:24.433 10:02:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:17:24.433 10:02:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:17:24.433 10:02:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:17:24.433 10:02:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:17:25.002 496+0 records in 00:17:25.002 496+0 records out 00:17:25.002 97517568 bytes (98 MB, 93 MiB) copied, 0.661433 s, 147 MB/s 00:17:25.002 10:02:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:25.002 10:02:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:25.002 10:02:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:25.002 10:02:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:25.002 10:02:01 
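The dd parameters above (bs=196608, count=496, yielding 97517568 bytes) follow directly from the raid5f geometry, with write_unit_size set to 384 blocks just before the transfer. A hedged sketch of that arithmetic (derived from the logged values, not taken from the test scripts):

```python
# Sketch (assumption-labelled, not from the SPDK scripts): derive the dd
# parameters used above from the raid5f geometry in the log. Assumes a
# 64 KiB strip (strip_size=64 from the test setup), 512-byte blocks, and a
# full-stripe write unit spanning the 3 data strips of a 4-disk raid5f.

STRIP_SIZE_KB = 64
BLOCK_SIZE = 512
NUM_BASE_BDEVS = 4
DATA_SIZE_BLOCKS = 63488              # per-base-bdev data area, from the dump

strip_blocks = STRIP_SIZE_KB * 1024 // BLOCK_SIZE        # 128 blocks per strip
write_unit_blocks = strip_blocks * (NUM_BASE_BDEVS - 1)  # 384, as the test sets
bs = write_unit_blocks * BLOCK_SIZE                      # 196608, the dd bs
count = DATA_SIZE_BLOCKS // strip_blocks                 # 496 full stripes
total = bs * count                                       # 97517568 bytes written

print(write_unit_blocks, bs, count, total)
```

So the 496 full-stripe writes cover the raid bdev's data area exactly once, which is why dd reports 97517568 bytes (93 MiB) copied.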
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:17:25.002 10:02:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:25.002 10:02:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:25.261 [2024-10-21 10:02:01.830122] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:25.261 10:02:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:25.261 10:02:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:25.261 10:02:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:25.261 10:02:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:25.261 10:02:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:25.261 10:02:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:25.261 10:02:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:25.262 10:02:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:25.262 10:02:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:25.262 10:02:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.262 10:02:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.262 [2024-10-21 10:02:01.850318] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:25.520 10:02:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.520 10:02:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:25.520 10:02:01 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:25.520 10:02:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:25.520 10:02:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:25.521 10:02:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:25.521 10:02:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:25.521 10:02:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:25.521 10:02:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:25.521 10:02:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:25.521 10:02:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:25.521 10:02:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:25.521 10:02:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.521 10:02:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.521 10:02:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.521 10:02:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.521 10:02:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:25.521 "name": "raid_bdev1", 00:17:25.521 "uuid": "f2a50a58-dd1d-48c4-9223-821e4d32aa46", 00:17:25.521 "strip_size_kb": 64, 00:17:25.521 "state": "online", 00:17:25.521 "raid_level": "raid5f", 00:17:25.521 "superblock": true, 00:17:25.521 "num_base_bdevs": 4, 00:17:25.521 "num_base_bdevs_discovered": 3, 00:17:25.521 "num_base_bdevs_operational": 3, 00:17:25.521 
"base_bdevs_list": [ 00:17:25.521 { 00:17:25.521 "name": null, 00:17:25.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.521 "is_configured": false, 00:17:25.521 "data_offset": 0, 00:17:25.521 "data_size": 63488 00:17:25.521 }, 00:17:25.521 { 00:17:25.521 "name": "BaseBdev2", 00:17:25.521 "uuid": "29ac600c-f126-5667-ad4f-72a40bd143a7", 00:17:25.521 "is_configured": true, 00:17:25.521 "data_offset": 2048, 00:17:25.521 "data_size": 63488 00:17:25.521 }, 00:17:25.521 { 00:17:25.521 "name": "BaseBdev3", 00:17:25.521 "uuid": "1d06a79a-bd1e-52f0-a396-f61c0c4adc06", 00:17:25.521 "is_configured": true, 00:17:25.521 "data_offset": 2048, 00:17:25.521 "data_size": 63488 00:17:25.521 }, 00:17:25.521 { 00:17:25.521 "name": "BaseBdev4", 00:17:25.521 "uuid": "6ffb1d0b-c554-5f71-b6a9-78a5b435db7e", 00:17:25.521 "is_configured": true, 00:17:25.521 "data_offset": 2048, 00:17:25.521 "data_size": 63488 00:17:25.521 } 00:17:25.521 ] 00:17:25.521 }' 00:17:25.521 10:02:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:25.521 10:02:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.781 10:02:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:25.781 10:02:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.781 10:02:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.781 [2024-10-21 10:02:02.337874] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:25.781 [2024-10-21 10:02:02.361553] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002a7e0 00:17:25.781 10:02:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.781 10:02:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:25.781 [2024-10-21 10:02:02.375844] 
bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:27.157 10:02:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:27.157 10:02:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:27.157 10:02:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:27.157 10:02:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:27.157 10:02:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:27.157 10:02:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.157 10:02:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.157 10:02:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.157 10:02:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:27.157 10:02:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.157 10:02:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:27.157 "name": "raid_bdev1", 00:17:27.157 "uuid": "f2a50a58-dd1d-48c4-9223-821e4d32aa46", 00:17:27.157 "strip_size_kb": 64, 00:17:27.157 "state": "online", 00:17:27.157 "raid_level": "raid5f", 00:17:27.157 "superblock": true, 00:17:27.157 "num_base_bdevs": 4, 00:17:27.157 "num_base_bdevs_discovered": 4, 00:17:27.157 "num_base_bdevs_operational": 4, 00:17:27.157 "process": { 00:17:27.157 "type": "rebuild", 00:17:27.157 "target": "spare", 00:17:27.157 "progress": { 00:17:27.157 "blocks": 17280, 00:17:27.157 "percent": 9 00:17:27.157 } 00:17:27.157 }, 00:17:27.157 "base_bdevs_list": [ 00:17:27.157 { 00:17:27.157 "name": "spare", 00:17:27.157 "uuid": 
"61b8df2d-1bcd-567b-a45e-12e962b3db5f", 00:17:27.157 "is_configured": true, 00:17:27.157 "data_offset": 2048, 00:17:27.157 "data_size": 63488 00:17:27.157 }, 00:17:27.157 { 00:17:27.157 "name": "BaseBdev2", 00:17:27.157 "uuid": "29ac600c-f126-5667-ad4f-72a40bd143a7", 00:17:27.157 "is_configured": true, 00:17:27.157 "data_offset": 2048, 00:17:27.157 "data_size": 63488 00:17:27.157 }, 00:17:27.157 { 00:17:27.157 "name": "BaseBdev3", 00:17:27.157 "uuid": "1d06a79a-bd1e-52f0-a396-f61c0c4adc06", 00:17:27.157 "is_configured": true, 00:17:27.157 "data_offset": 2048, 00:17:27.157 "data_size": 63488 00:17:27.157 }, 00:17:27.157 { 00:17:27.157 "name": "BaseBdev4", 00:17:27.157 "uuid": "6ffb1d0b-c554-5f71-b6a9-78a5b435db7e", 00:17:27.157 "is_configured": true, 00:17:27.157 "data_offset": 2048, 00:17:27.157 "data_size": 63488 00:17:27.157 } 00:17:27.157 ] 00:17:27.157 }' 00:17:27.157 10:02:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:27.157 10:02:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:27.157 10:02:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:27.157 10:02:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:27.157 10:02:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:27.157 10:02:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.157 10:02:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.157 [2024-10-21 10:02:03.531267] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:27.157 [2024-10-21 10:02:03.587390] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:27.157 [2024-10-21 10:02:03.587593] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:27.157 [2024-10-21 10:02:03.587622] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:27.157 [2024-10-21 10:02:03.587638] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:27.157 10:02:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.157 10:02:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:27.157 10:02:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:27.157 10:02:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:27.157 10:02:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:27.157 10:02:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:27.157 10:02:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:27.157 10:02:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:27.157 10:02:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:27.157 10:02:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:27.157 10:02:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:27.157 10:02:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:27.157 10:02:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.157 10:02:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.157 10:02:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:17:27.157 10:02:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.157 10:02:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:27.157 "name": "raid_bdev1", 00:17:27.157 "uuid": "f2a50a58-dd1d-48c4-9223-821e4d32aa46", 00:17:27.157 "strip_size_kb": 64, 00:17:27.157 "state": "online", 00:17:27.157 "raid_level": "raid5f", 00:17:27.157 "superblock": true, 00:17:27.157 "num_base_bdevs": 4, 00:17:27.157 "num_base_bdevs_discovered": 3, 00:17:27.157 "num_base_bdevs_operational": 3, 00:17:27.157 "base_bdevs_list": [ 00:17:27.157 { 00:17:27.157 "name": null, 00:17:27.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:27.157 "is_configured": false, 00:17:27.157 "data_offset": 0, 00:17:27.157 "data_size": 63488 00:17:27.157 }, 00:17:27.157 { 00:17:27.157 "name": "BaseBdev2", 00:17:27.157 "uuid": "29ac600c-f126-5667-ad4f-72a40bd143a7", 00:17:27.157 "is_configured": true, 00:17:27.157 "data_offset": 2048, 00:17:27.157 "data_size": 63488 00:17:27.157 }, 00:17:27.157 { 00:17:27.157 "name": "BaseBdev3", 00:17:27.157 "uuid": "1d06a79a-bd1e-52f0-a396-f61c0c4adc06", 00:17:27.157 "is_configured": true, 00:17:27.157 "data_offset": 2048, 00:17:27.157 "data_size": 63488 00:17:27.157 }, 00:17:27.157 { 00:17:27.157 "name": "BaseBdev4", 00:17:27.157 "uuid": "6ffb1d0b-c554-5f71-b6a9-78a5b435db7e", 00:17:27.157 "is_configured": true, 00:17:27.157 "data_offset": 2048, 00:17:27.157 "data_size": 63488 00:17:27.157 } 00:17:27.157 ] 00:17:27.157 }' 00:17:27.158 10:02:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:27.158 10:02:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.416 10:02:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:27.416 10:02:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:27.416 
10:02:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:27.416 10:02:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:27.416 10:02:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:27.416 10:02:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:27.416 10:02:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.416 10:02:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.416 10:02:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.416 10:02:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.676 10:02:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:27.676 "name": "raid_bdev1", 00:17:27.676 "uuid": "f2a50a58-dd1d-48c4-9223-821e4d32aa46", 00:17:27.676 "strip_size_kb": 64, 00:17:27.676 "state": "online", 00:17:27.676 "raid_level": "raid5f", 00:17:27.676 "superblock": true, 00:17:27.676 "num_base_bdevs": 4, 00:17:27.676 "num_base_bdevs_discovered": 3, 00:17:27.676 "num_base_bdevs_operational": 3, 00:17:27.676 "base_bdevs_list": [ 00:17:27.676 { 00:17:27.676 "name": null, 00:17:27.676 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:27.676 "is_configured": false, 00:17:27.676 "data_offset": 0, 00:17:27.676 "data_size": 63488 00:17:27.676 }, 00:17:27.676 { 00:17:27.676 "name": "BaseBdev2", 00:17:27.676 "uuid": "29ac600c-f126-5667-ad4f-72a40bd143a7", 00:17:27.676 "is_configured": true, 00:17:27.676 "data_offset": 2048, 00:17:27.676 "data_size": 63488 00:17:27.676 }, 00:17:27.676 { 00:17:27.676 "name": "BaseBdev3", 00:17:27.676 "uuid": "1d06a79a-bd1e-52f0-a396-f61c0c4adc06", 00:17:27.676 "is_configured": true, 00:17:27.676 "data_offset": 2048, 00:17:27.676 
"data_size": 63488 00:17:27.676 }, 00:17:27.676 { 00:17:27.676 "name": "BaseBdev4", 00:17:27.676 "uuid": "6ffb1d0b-c554-5f71-b6a9-78a5b435db7e", 00:17:27.676 "is_configured": true, 00:17:27.676 "data_offset": 2048, 00:17:27.676 "data_size": 63488 00:17:27.676 } 00:17:27.676 ] 00:17:27.676 }' 00:17:27.676 10:02:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:27.676 10:02:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:27.676 10:02:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:27.676 10:02:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:27.676 10:02:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:27.676 10:02:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.676 10:02:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.676 [2024-10-21 10:02:04.125467] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:27.676 [2024-10-21 10:02:04.147205] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002a8b0 00:17:27.676 10:02:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.676 10:02:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:27.676 [2024-10-21 10:02:04.160501] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:28.613 10:02:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:28.613 10:02:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:28.613 10:02:05 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:28.613 10:02:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:28.613 10:02:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:28.613 10:02:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.613 10:02:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:28.613 10:02:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.613 10:02:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.613 10:02:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.613 10:02:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:28.613 "name": "raid_bdev1", 00:17:28.613 "uuid": "f2a50a58-dd1d-48c4-9223-821e4d32aa46", 00:17:28.613 "strip_size_kb": 64, 00:17:28.613 "state": "online", 00:17:28.613 "raid_level": "raid5f", 00:17:28.613 "superblock": true, 00:17:28.613 "num_base_bdevs": 4, 00:17:28.613 "num_base_bdevs_discovered": 4, 00:17:28.613 "num_base_bdevs_operational": 4, 00:17:28.613 "process": { 00:17:28.613 "type": "rebuild", 00:17:28.613 "target": "spare", 00:17:28.613 "progress": { 00:17:28.613 "blocks": 17280, 00:17:28.613 "percent": 9 00:17:28.613 } 00:17:28.613 }, 00:17:28.613 "base_bdevs_list": [ 00:17:28.613 { 00:17:28.613 "name": "spare", 00:17:28.613 "uuid": "61b8df2d-1bcd-567b-a45e-12e962b3db5f", 00:17:28.613 "is_configured": true, 00:17:28.613 "data_offset": 2048, 00:17:28.613 "data_size": 63488 00:17:28.613 }, 00:17:28.613 { 00:17:28.613 "name": "BaseBdev2", 00:17:28.613 "uuid": "29ac600c-f126-5667-ad4f-72a40bd143a7", 00:17:28.613 "is_configured": true, 00:17:28.613 "data_offset": 2048, 00:17:28.613 "data_size": 63488 00:17:28.613 }, 00:17:28.613 { 
00:17:28.613 "name": "BaseBdev3", 00:17:28.613 "uuid": "1d06a79a-bd1e-52f0-a396-f61c0c4adc06", 00:17:28.613 "is_configured": true, 00:17:28.613 "data_offset": 2048, 00:17:28.613 "data_size": 63488 00:17:28.613 }, 00:17:28.613 { 00:17:28.613 "name": "BaseBdev4", 00:17:28.613 "uuid": "6ffb1d0b-c554-5f71-b6a9-78a5b435db7e", 00:17:28.613 "is_configured": true, 00:17:28.613 "data_offset": 2048, 00:17:28.613 "data_size": 63488 00:17:28.613 } 00:17:28.613 ] 00:17:28.613 }' 00:17:28.940 10:02:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:28.940 10:02:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:28.940 10:02:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:28.940 10:02:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:28.940 10:02:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:28.940 10:02:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:28.940 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:28.940 10:02:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:17:28.940 10:02:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:17:28.940 10:02:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=652 00:17:28.940 10:02:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:28.940 10:02:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:28.940 10:02:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:28.940 10:02:05 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:28.940 10:02:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:28.940 10:02:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:28.940 10:02:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.940 10:02:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.940 10:02:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.940 10:02:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:28.940 10:02:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.940 10:02:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:28.940 "name": "raid_bdev1", 00:17:28.940 "uuid": "f2a50a58-dd1d-48c4-9223-821e4d32aa46", 00:17:28.940 "strip_size_kb": 64, 00:17:28.940 "state": "online", 00:17:28.940 "raid_level": "raid5f", 00:17:28.940 "superblock": true, 00:17:28.940 "num_base_bdevs": 4, 00:17:28.940 "num_base_bdevs_discovered": 4, 00:17:28.940 "num_base_bdevs_operational": 4, 00:17:28.940 "process": { 00:17:28.940 "type": "rebuild", 00:17:28.940 "target": "spare", 00:17:28.940 "progress": { 00:17:28.940 "blocks": 21120, 00:17:28.940 "percent": 11 00:17:28.940 } 00:17:28.940 }, 00:17:28.940 "base_bdevs_list": [ 00:17:28.940 { 00:17:28.940 "name": "spare", 00:17:28.940 "uuid": "61b8df2d-1bcd-567b-a45e-12e962b3db5f", 00:17:28.940 "is_configured": true, 00:17:28.940 "data_offset": 2048, 00:17:28.940 "data_size": 63488 00:17:28.940 }, 00:17:28.940 { 00:17:28.940 "name": "BaseBdev2", 00:17:28.940 "uuid": "29ac600c-f126-5667-ad4f-72a40bd143a7", 00:17:28.940 "is_configured": true, 00:17:28.940 "data_offset": 2048, 00:17:28.940 "data_size": 63488 00:17:28.940 }, 00:17:28.940 { 
00:17:28.940 "name": "BaseBdev3", 00:17:28.940 "uuid": "1d06a79a-bd1e-52f0-a396-f61c0c4adc06", 00:17:28.940 "is_configured": true, 00:17:28.940 "data_offset": 2048, 00:17:28.940 "data_size": 63488 00:17:28.940 }, 00:17:28.940 { 00:17:28.940 "name": "BaseBdev4", 00:17:28.940 "uuid": "6ffb1d0b-c554-5f71-b6a9-78a5b435db7e", 00:17:28.940 "is_configured": true, 00:17:28.940 "data_offset": 2048, 00:17:28.940 "data_size": 63488 00:17:28.940 } 00:17:28.940 ] 00:17:28.940 }' 00:17:28.940 10:02:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:28.940 10:02:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:28.940 10:02:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:28.940 10:02:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:28.940 10:02:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:29.879 10:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:29.879 10:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:29.879 10:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:29.879 10:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:29.879 10:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:29.879 10:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:29.879 10:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.879 10:02:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.879 10:02:06 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:17:29.879 10:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:30.137 10:02:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.137 10:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:30.137 "name": "raid_bdev1", 00:17:30.137 "uuid": "f2a50a58-dd1d-48c4-9223-821e4d32aa46", 00:17:30.137 "strip_size_kb": 64, 00:17:30.137 "state": "online", 00:17:30.137 "raid_level": "raid5f", 00:17:30.137 "superblock": true, 00:17:30.137 "num_base_bdevs": 4, 00:17:30.137 "num_base_bdevs_discovered": 4, 00:17:30.137 "num_base_bdevs_operational": 4, 00:17:30.137 "process": { 00:17:30.137 "type": "rebuild", 00:17:30.137 "target": "spare", 00:17:30.137 "progress": { 00:17:30.137 "blocks": 42240, 00:17:30.137 "percent": 22 00:17:30.137 } 00:17:30.137 }, 00:17:30.137 "base_bdevs_list": [ 00:17:30.137 { 00:17:30.137 "name": "spare", 00:17:30.137 "uuid": "61b8df2d-1bcd-567b-a45e-12e962b3db5f", 00:17:30.137 "is_configured": true, 00:17:30.137 "data_offset": 2048, 00:17:30.137 "data_size": 63488 00:17:30.137 }, 00:17:30.137 { 00:17:30.137 "name": "BaseBdev2", 00:17:30.137 "uuid": "29ac600c-f126-5667-ad4f-72a40bd143a7", 00:17:30.137 "is_configured": true, 00:17:30.137 "data_offset": 2048, 00:17:30.137 "data_size": 63488 00:17:30.137 }, 00:17:30.137 { 00:17:30.137 "name": "BaseBdev3", 00:17:30.137 "uuid": "1d06a79a-bd1e-52f0-a396-f61c0c4adc06", 00:17:30.137 "is_configured": true, 00:17:30.137 "data_offset": 2048, 00:17:30.137 "data_size": 63488 00:17:30.137 }, 00:17:30.137 { 00:17:30.137 "name": "BaseBdev4", 00:17:30.137 "uuid": "6ffb1d0b-c554-5f71-b6a9-78a5b435db7e", 00:17:30.137 "is_configured": true, 00:17:30.137 "data_offset": 2048, 00:17:30.137 "data_size": 63488 00:17:30.137 } 00:17:30.137 ] 00:17:30.137 }' 00:17:30.137 10:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # 
jq -r '.process.type // "none"' 00:17:30.137 10:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:30.137 10:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:30.137 10:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:30.137 10:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:31.073 10:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:31.073 10:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:31.073 10:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:31.073 10:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:31.073 10:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:31.073 10:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:31.073 10:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.073 10:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:31.073 10:02:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.073 10:02:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.073 10:02:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.331 10:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:31.331 "name": "raid_bdev1", 00:17:31.331 "uuid": "f2a50a58-dd1d-48c4-9223-821e4d32aa46", 00:17:31.331 "strip_size_kb": 64, 00:17:31.331 "state": "online", 00:17:31.331 
"raid_level": "raid5f", 00:17:31.331 "superblock": true, 00:17:31.331 "num_base_bdevs": 4, 00:17:31.331 "num_base_bdevs_discovered": 4, 00:17:31.331 "num_base_bdevs_operational": 4, 00:17:31.331 "process": { 00:17:31.331 "type": "rebuild", 00:17:31.332 "target": "spare", 00:17:31.332 "progress": { 00:17:31.332 "blocks": 65280, 00:17:31.332 "percent": 34 00:17:31.332 } 00:17:31.332 }, 00:17:31.332 "base_bdevs_list": [ 00:17:31.332 { 00:17:31.332 "name": "spare", 00:17:31.332 "uuid": "61b8df2d-1bcd-567b-a45e-12e962b3db5f", 00:17:31.332 "is_configured": true, 00:17:31.332 "data_offset": 2048, 00:17:31.332 "data_size": 63488 00:17:31.332 }, 00:17:31.332 { 00:17:31.332 "name": "BaseBdev2", 00:17:31.332 "uuid": "29ac600c-f126-5667-ad4f-72a40bd143a7", 00:17:31.332 "is_configured": true, 00:17:31.332 "data_offset": 2048, 00:17:31.332 "data_size": 63488 00:17:31.332 }, 00:17:31.332 { 00:17:31.332 "name": "BaseBdev3", 00:17:31.332 "uuid": "1d06a79a-bd1e-52f0-a396-f61c0c4adc06", 00:17:31.332 "is_configured": true, 00:17:31.332 "data_offset": 2048, 00:17:31.332 "data_size": 63488 00:17:31.332 }, 00:17:31.332 { 00:17:31.332 "name": "BaseBdev4", 00:17:31.332 "uuid": "6ffb1d0b-c554-5f71-b6a9-78a5b435db7e", 00:17:31.332 "is_configured": true, 00:17:31.332 "data_offset": 2048, 00:17:31.332 "data_size": 63488 00:17:31.332 } 00:17:31.332 ] 00:17:31.332 }' 00:17:31.332 10:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:31.332 10:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:31.332 10:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:31.332 10:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:31.332 10:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:32.269 10:02:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- 
# (( SECONDS < timeout )) 00:17:32.269 10:02:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:32.269 10:02:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:32.269 10:02:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:32.269 10:02:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:32.269 10:02:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:32.269 10:02:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.269 10:02:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:32.269 10:02:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.269 10:02:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:32.269 10:02:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.269 10:02:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:32.269 "name": "raid_bdev1", 00:17:32.269 "uuid": "f2a50a58-dd1d-48c4-9223-821e4d32aa46", 00:17:32.269 "strip_size_kb": 64, 00:17:32.269 "state": "online", 00:17:32.269 "raid_level": "raid5f", 00:17:32.269 "superblock": true, 00:17:32.269 "num_base_bdevs": 4, 00:17:32.269 "num_base_bdevs_discovered": 4, 00:17:32.269 "num_base_bdevs_operational": 4, 00:17:32.269 "process": { 00:17:32.269 "type": "rebuild", 00:17:32.269 "target": "spare", 00:17:32.269 "progress": { 00:17:32.269 "blocks": 86400, 00:17:32.269 "percent": 45 00:17:32.269 } 00:17:32.269 }, 00:17:32.269 "base_bdevs_list": [ 00:17:32.269 { 00:17:32.269 "name": "spare", 00:17:32.269 "uuid": "61b8df2d-1bcd-567b-a45e-12e962b3db5f", 00:17:32.269 "is_configured": true, 
00:17:32.269 "data_offset": 2048, 00:17:32.269 "data_size": 63488 00:17:32.269 }, 00:17:32.269 { 00:17:32.269 "name": "BaseBdev2", 00:17:32.269 "uuid": "29ac600c-f126-5667-ad4f-72a40bd143a7", 00:17:32.269 "is_configured": true, 00:17:32.269 "data_offset": 2048, 00:17:32.269 "data_size": 63488 00:17:32.269 }, 00:17:32.269 { 00:17:32.269 "name": "BaseBdev3", 00:17:32.269 "uuid": "1d06a79a-bd1e-52f0-a396-f61c0c4adc06", 00:17:32.269 "is_configured": true, 00:17:32.269 "data_offset": 2048, 00:17:32.269 "data_size": 63488 00:17:32.269 }, 00:17:32.269 { 00:17:32.269 "name": "BaseBdev4", 00:17:32.269 "uuid": "6ffb1d0b-c554-5f71-b6a9-78a5b435db7e", 00:17:32.269 "is_configured": true, 00:17:32.269 "data_offset": 2048, 00:17:32.269 "data_size": 63488 00:17:32.269 } 00:17:32.269 ] 00:17:32.269 }' 00:17:32.269 10:02:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:32.528 10:02:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:32.528 10:02:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:32.528 10:02:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:32.528 10:02:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:33.467 10:02:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:33.467 10:02:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:33.467 10:02:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:33.467 10:02:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:33.467 10:02:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:33.467 10:02:09 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:33.467 10:02:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.467 10:02:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.467 10:02:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.467 10:02:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:33.467 10:02:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.467 10:02:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:33.467 "name": "raid_bdev1", 00:17:33.467 "uuid": "f2a50a58-dd1d-48c4-9223-821e4d32aa46", 00:17:33.467 "strip_size_kb": 64, 00:17:33.467 "state": "online", 00:17:33.467 "raid_level": "raid5f", 00:17:33.467 "superblock": true, 00:17:33.467 "num_base_bdevs": 4, 00:17:33.467 "num_base_bdevs_discovered": 4, 00:17:33.467 "num_base_bdevs_operational": 4, 00:17:33.467 "process": { 00:17:33.467 "type": "rebuild", 00:17:33.467 "target": "spare", 00:17:33.467 "progress": { 00:17:33.467 "blocks": 109440, 00:17:33.467 "percent": 57 00:17:33.467 } 00:17:33.467 }, 00:17:33.467 "base_bdevs_list": [ 00:17:33.467 { 00:17:33.467 "name": "spare", 00:17:33.467 "uuid": "61b8df2d-1bcd-567b-a45e-12e962b3db5f", 00:17:33.467 "is_configured": true, 00:17:33.467 "data_offset": 2048, 00:17:33.467 "data_size": 63488 00:17:33.467 }, 00:17:33.467 { 00:17:33.467 "name": "BaseBdev2", 00:17:33.467 "uuid": "29ac600c-f126-5667-ad4f-72a40bd143a7", 00:17:33.467 "is_configured": true, 00:17:33.467 "data_offset": 2048, 00:17:33.467 "data_size": 63488 00:17:33.467 }, 00:17:33.467 { 00:17:33.467 "name": "BaseBdev3", 00:17:33.467 "uuid": "1d06a79a-bd1e-52f0-a396-f61c0c4adc06", 00:17:33.467 "is_configured": true, 00:17:33.467 "data_offset": 2048, 00:17:33.467 "data_size": 63488 00:17:33.467 }, 00:17:33.467 
{ 00:17:33.467 "name": "BaseBdev4", 00:17:33.467 "uuid": "6ffb1d0b-c554-5f71-b6a9-78a5b435db7e", 00:17:33.467 "is_configured": true, 00:17:33.467 "data_offset": 2048, 00:17:33.467 "data_size": 63488 00:17:33.467 } 00:17:33.467 ] 00:17:33.467 }' 00:17:33.467 10:02:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:33.467 10:02:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:33.467 10:02:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:33.727 10:02:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:33.727 10:02:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:34.666 10:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:34.666 10:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:34.666 10:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:34.666 10:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:34.666 10:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:34.666 10:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:34.666 10:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.666 10:02:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.666 10:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:34.666 10:02:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.667 10:02:11 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.667 10:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:34.667 "name": "raid_bdev1", 00:17:34.667 "uuid": "f2a50a58-dd1d-48c4-9223-821e4d32aa46", 00:17:34.667 "strip_size_kb": 64, 00:17:34.667 "state": "online", 00:17:34.667 "raid_level": "raid5f", 00:17:34.667 "superblock": true, 00:17:34.667 "num_base_bdevs": 4, 00:17:34.667 "num_base_bdevs_discovered": 4, 00:17:34.667 "num_base_bdevs_operational": 4, 00:17:34.667 "process": { 00:17:34.667 "type": "rebuild", 00:17:34.667 "target": "spare", 00:17:34.667 "progress": { 00:17:34.667 "blocks": 130560, 00:17:34.667 "percent": 68 00:17:34.667 } 00:17:34.667 }, 00:17:34.667 "base_bdevs_list": [ 00:17:34.667 { 00:17:34.667 "name": "spare", 00:17:34.667 "uuid": "61b8df2d-1bcd-567b-a45e-12e962b3db5f", 00:17:34.667 "is_configured": true, 00:17:34.667 "data_offset": 2048, 00:17:34.667 "data_size": 63488 00:17:34.667 }, 00:17:34.667 { 00:17:34.667 "name": "BaseBdev2", 00:17:34.667 "uuid": "29ac600c-f126-5667-ad4f-72a40bd143a7", 00:17:34.667 "is_configured": true, 00:17:34.667 "data_offset": 2048, 00:17:34.667 "data_size": 63488 00:17:34.667 }, 00:17:34.667 { 00:17:34.667 "name": "BaseBdev3", 00:17:34.667 "uuid": "1d06a79a-bd1e-52f0-a396-f61c0c4adc06", 00:17:34.667 "is_configured": true, 00:17:34.667 "data_offset": 2048, 00:17:34.667 "data_size": 63488 00:17:34.667 }, 00:17:34.667 { 00:17:34.667 "name": "BaseBdev4", 00:17:34.667 "uuid": "6ffb1d0b-c554-5f71-b6a9-78a5b435db7e", 00:17:34.667 "is_configured": true, 00:17:34.667 "data_offset": 2048, 00:17:34.667 "data_size": 63488 00:17:34.667 } 00:17:34.667 ] 00:17:34.667 }' 00:17:34.667 10:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:34.667 10:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:34.667 10:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 
-- # jq -r '.process.target // "none"' 00:17:34.667 10:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:34.667 10:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:36.047 10:02:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:36.047 10:02:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:36.047 10:02:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:36.047 10:02:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:36.047 10:02:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:36.047 10:02:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:36.047 10:02:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.047 10:02:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:36.047 10:02:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.047 10:02:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.047 10:02:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.047 10:02:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:36.047 "name": "raid_bdev1", 00:17:36.047 "uuid": "f2a50a58-dd1d-48c4-9223-821e4d32aa46", 00:17:36.047 "strip_size_kb": 64, 00:17:36.047 "state": "online", 00:17:36.047 "raid_level": "raid5f", 00:17:36.047 "superblock": true, 00:17:36.047 "num_base_bdevs": 4, 00:17:36.047 "num_base_bdevs_discovered": 4, 00:17:36.047 "num_base_bdevs_operational": 4, 00:17:36.047 "process": { 00:17:36.047 "type": 
"rebuild", 00:17:36.047 "target": "spare", 00:17:36.047 "progress": { 00:17:36.047 "blocks": 153600, 00:17:36.047 "percent": 80 00:17:36.047 } 00:17:36.047 }, 00:17:36.047 "base_bdevs_list": [ 00:17:36.047 { 00:17:36.047 "name": "spare", 00:17:36.047 "uuid": "61b8df2d-1bcd-567b-a45e-12e962b3db5f", 00:17:36.047 "is_configured": true, 00:17:36.047 "data_offset": 2048, 00:17:36.047 "data_size": 63488 00:17:36.047 }, 00:17:36.047 { 00:17:36.047 "name": "BaseBdev2", 00:17:36.047 "uuid": "29ac600c-f126-5667-ad4f-72a40bd143a7", 00:17:36.047 "is_configured": true, 00:17:36.047 "data_offset": 2048, 00:17:36.047 "data_size": 63488 00:17:36.047 }, 00:17:36.047 { 00:17:36.047 "name": "BaseBdev3", 00:17:36.047 "uuid": "1d06a79a-bd1e-52f0-a396-f61c0c4adc06", 00:17:36.047 "is_configured": true, 00:17:36.047 "data_offset": 2048, 00:17:36.047 "data_size": 63488 00:17:36.047 }, 00:17:36.047 { 00:17:36.047 "name": "BaseBdev4", 00:17:36.047 "uuid": "6ffb1d0b-c554-5f71-b6a9-78a5b435db7e", 00:17:36.047 "is_configured": true, 00:17:36.047 "data_offset": 2048, 00:17:36.047 "data_size": 63488 00:17:36.047 } 00:17:36.047 ] 00:17:36.047 }' 00:17:36.047 10:02:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:36.047 10:02:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:36.047 10:02:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:36.047 10:02:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:36.047 10:02:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:36.988 10:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:36.988 10:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:36.988 10:02:13 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:36.988 10:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:36.988 10:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:36.988 10:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:36.988 10:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.988 10:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:36.988 10:02:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.988 10:02:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.988 10:02:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.988 10:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:36.988 "name": "raid_bdev1", 00:17:36.988 "uuid": "f2a50a58-dd1d-48c4-9223-821e4d32aa46", 00:17:36.988 "strip_size_kb": 64, 00:17:36.988 "state": "online", 00:17:36.988 "raid_level": "raid5f", 00:17:36.988 "superblock": true, 00:17:36.988 "num_base_bdevs": 4, 00:17:36.988 "num_base_bdevs_discovered": 4, 00:17:36.988 "num_base_bdevs_operational": 4, 00:17:36.988 "process": { 00:17:36.988 "type": "rebuild", 00:17:36.988 "target": "spare", 00:17:36.988 "progress": { 00:17:36.988 "blocks": 174720, 00:17:36.988 "percent": 91 00:17:36.988 } 00:17:36.988 }, 00:17:36.988 "base_bdevs_list": [ 00:17:36.988 { 00:17:36.988 "name": "spare", 00:17:36.988 "uuid": "61b8df2d-1bcd-567b-a45e-12e962b3db5f", 00:17:36.988 "is_configured": true, 00:17:36.988 "data_offset": 2048, 00:17:36.988 "data_size": 63488 00:17:36.988 }, 00:17:36.988 { 00:17:36.988 "name": "BaseBdev2", 00:17:36.988 "uuid": "29ac600c-f126-5667-ad4f-72a40bd143a7", 00:17:36.988 
"is_configured": true, 00:17:36.988 "data_offset": 2048, 00:17:36.988 "data_size": 63488 00:17:36.988 }, 00:17:36.988 { 00:17:36.988 "name": "BaseBdev3", 00:17:36.988 "uuid": "1d06a79a-bd1e-52f0-a396-f61c0c4adc06", 00:17:36.988 "is_configured": true, 00:17:36.988 "data_offset": 2048, 00:17:36.988 "data_size": 63488 00:17:36.988 }, 00:17:36.988 { 00:17:36.988 "name": "BaseBdev4", 00:17:36.988 "uuid": "6ffb1d0b-c554-5f71-b6a9-78a5b435db7e", 00:17:36.988 "is_configured": true, 00:17:36.988 "data_offset": 2048, 00:17:36.988 "data_size": 63488 00:17:36.988 } 00:17:36.988 ] 00:17:36.988 }' 00:17:36.988 10:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:36.988 10:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:36.988 10:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:36.988 10:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:36.988 10:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:37.927 [2024-10-21 10:02:14.248507] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:37.927 [2024-10-21 10:02:14.248618] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:37.927 [2024-10-21 10:02:14.248794] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:38.191 10:02:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:38.191 10:02:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:38.191 10:02:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:38.191 10:02:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:17:38.191 10:02:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:38.191 10:02:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:38.191 10:02:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.191 10:02:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:38.191 10:02:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.191 10:02:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.191 10:02:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.191 10:02:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:38.191 "name": "raid_bdev1", 00:17:38.191 "uuid": "f2a50a58-dd1d-48c4-9223-821e4d32aa46", 00:17:38.191 "strip_size_kb": 64, 00:17:38.191 "state": "online", 00:17:38.191 "raid_level": "raid5f", 00:17:38.191 "superblock": true, 00:17:38.191 "num_base_bdevs": 4, 00:17:38.191 "num_base_bdevs_discovered": 4, 00:17:38.191 "num_base_bdevs_operational": 4, 00:17:38.191 "base_bdevs_list": [ 00:17:38.191 { 00:17:38.191 "name": "spare", 00:17:38.191 "uuid": "61b8df2d-1bcd-567b-a45e-12e962b3db5f", 00:17:38.191 "is_configured": true, 00:17:38.191 "data_offset": 2048, 00:17:38.191 "data_size": 63488 00:17:38.191 }, 00:17:38.191 { 00:17:38.191 "name": "BaseBdev2", 00:17:38.191 "uuid": "29ac600c-f126-5667-ad4f-72a40bd143a7", 00:17:38.191 "is_configured": true, 00:17:38.191 "data_offset": 2048, 00:17:38.191 "data_size": 63488 00:17:38.191 }, 00:17:38.191 { 00:17:38.191 "name": "BaseBdev3", 00:17:38.191 "uuid": "1d06a79a-bd1e-52f0-a396-f61c0c4adc06", 00:17:38.191 "is_configured": true, 00:17:38.191 "data_offset": 2048, 00:17:38.191 "data_size": 63488 00:17:38.191 }, 00:17:38.191 { 00:17:38.191 "name": 
"BaseBdev4", 00:17:38.191 "uuid": "6ffb1d0b-c554-5f71-b6a9-78a5b435db7e", 00:17:38.191 "is_configured": true, 00:17:38.191 "data_offset": 2048, 00:17:38.191 "data_size": 63488 00:17:38.191 } 00:17:38.191 ] 00:17:38.191 }' 00:17:38.191 10:02:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:38.191 10:02:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:38.191 10:02:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:38.191 10:02:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:38.191 10:02:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:17:38.191 10:02:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:38.191 10:02:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:38.191 10:02:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:38.191 10:02:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:38.191 10:02:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:38.191 10:02:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.191 10:02:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.191 10:02:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:38.191 10:02:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.191 10:02:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.468 10:02:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:17:38.468 "name": "raid_bdev1", 00:17:38.468 "uuid": "f2a50a58-dd1d-48c4-9223-821e4d32aa46", 00:17:38.468 "strip_size_kb": 64, 00:17:38.468 "state": "online", 00:17:38.468 "raid_level": "raid5f", 00:17:38.468 "superblock": true, 00:17:38.468 "num_base_bdevs": 4, 00:17:38.468 "num_base_bdevs_discovered": 4, 00:17:38.468 "num_base_bdevs_operational": 4, 00:17:38.468 "base_bdevs_list": [ 00:17:38.468 { 00:17:38.468 "name": "spare", 00:17:38.468 "uuid": "61b8df2d-1bcd-567b-a45e-12e962b3db5f", 00:17:38.468 "is_configured": true, 00:17:38.468 "data_offset": 2048, 00:17:38.468 "data_size": 63488 00:17:38.468 }, 00:17:38.468 { 00:17:38.468 "name": "BaseBdev2", 00:17:38.468 "uuid": "29ac600c-f126-5667-ad4f-72a40bd143a7", 00:17:38.468 "is_configured": true, 00:17:38.468 "data_offset": 2048, 00:17:38.468 "data_size": 63488 00:17:38.468 }, 00:17:38.468 { 00:17:38.468 "name": "BaseBdev3", 00:17:38.468 "uuid": "1d06a79a-bd1e-52f0-a396-f61c0c4adc06", 00:17:38.468 "is_configured": true, 00:17:38.468 "data_offset": 2048, 00:17:38.468 "data_size": 63488 00:17:38.468 }, 00:17:38.468 { 00:17:38.468 "name": "BaseBdev4", 00:17:38.468 "uuid": "6ffb1d0b-c554-5f71-b6a9-78a5b435db7e", 00:17:38.468 "is_configured": true, 00:17:38.468 "data_offset": 2048, 00:17:38.468 "data_size": 63488 00:17:38.468 } 00:17:38.468 ] 00:17:38.468 }' 00:17:38.468 10:02:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:38.468 10:02:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:38.468 10:02:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:38.468 10:02:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:38.468 10:02:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:38.468 10:02:14 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:38.468 10:02:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:38.468 10:02:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:38.468 10:02:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:38.468 10:02:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:38.468 10:02:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:38.468 10:02:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:38.468 10:02:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:38.468 10:02:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:38.468 10:02:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.469 10:02:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.469 10:02:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:38.469 10:02:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.469 10:02:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.469 10:02:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:38.469 "name": "raid_bdev1", 00:17:38.469 "uuid": "f2a50a58-dd1d-48c4-9223-821e4d32aa46", 00:17:38.469 "strip_size_kb": 64, 00:17:38.469 "state": "online", 00:17:38.469 "raid_level": "raid5f", 00:17:38.469 "superblock": true, 00:17:38.469 "num_base_bdevs": 4, 00:17:38.469 "num_base_bdevs_discovered": 4, 00:17:38.469 "num_base_bdevs_operational": 4, 00:17:38.469 "base_bdevs_list": [ 00:17:38.469 { 
00:17:38.469 "name": "spare", 00:17:38.469 "uuid": "61b8df2d-1bcd-567b-a45e-12e962b3db5f", 00:17:38.469 "is_configured": true, 00:17:38.469 "data_offset": 2048, 00:17:38.469 "data_size": 63488 00:17:38.469 }, 00:17:38.469 { 00:17:38.469 "name": "BaseBdev2", 00:17:38.469 "uuid": "29ac600c-f126-5667-ad4f-72a40bd143a7", 00:17:38.469 "is_configured": true, 00:17:38.469 "data_offset": 2048, 00:17:38.469 "data_size": 63488 00:17:38.469 }, 00:17:38.469 { 00:17:38.469 "name": "BaseBdev3", 00:17:38.469 "uuid": "1d06a79a-bd1e-52f0-a396-f61c0c4adc06", 00:17:38.469 "is_configured": true, 00:17:38.469 "data_offset": 2048, 00:17:38.469 "data_size": 63488 00:17:38.469 }, 00:17:38.469 { 00:17:38.469 "name": "BaseBdev4", 00:17:38.469 "uuid": "6ffb1d0b-c554-5f71-b6a9-78a5b435db7e", 00:17:38.469 "is_configured": true, 00:17:38.469 "data_offset": 2048, 00:17:38.469 "data_size": 63488 00:17:38.469 } 00:17:38.469 ] 00:17:38.469 }' 00:17:38.469 10:02:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:38.469 10:02:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.051 10:02:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:39.051 10:02:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.051 10:02:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.051 [2024-10-21 10:02:15.377736] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:39.051 [2024-10-21 10:02:15.377784] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:39.051 [2024-10-21 10:02:15.377909] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:39.051 [2024-10-21 10:02:15.378047] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:39.051 [2024-10-21 
10:02:15.378068] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000005b80 name raid_bdev1, state offline 00:17:39.051 10:02:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.051 10:02:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.051 10:02:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.052 10:02:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:17:39.052 10:02:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.052 10:02:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.052 10:02:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:39.052 10:02:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:39.052 10:02:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:39.052 10:02:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:39.052 10:02:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:39.052 10:02:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:39.052 10:02:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:39.052 10:02:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:39.052 10:02:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:39.052 10:02:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:17:39.052 10:02:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:39.052 10:02:15 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:39.052 10:02:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:39.312 /dev/nbd0 00:17:39.312 10:02:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:39.312 10:02:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:39.312 10:02:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:17:39.312 10:02:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:17:39.312 10:02:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:39.312 10:02:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:39.312 10:02:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:17:39.312 10:02:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:17:39.312 10:02:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:39.312 10:02:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:39.312 10:02:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:39.312 1+0 records in 00:17:39.312 1+0 records out 00:17:39.312 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000239408 s, 17.1 MB/s 00:17:39.312 10:02:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:39.312 10:02:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:17:39.312 10:02:15 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:39.312 10:02:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:39.312 10:02:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:17:39.312 10:02:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:39.312 10:02:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:39.312 10:02:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:39.573 /dev/nbd1 00:17:39.573 10:02:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:39.573 10:02:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:39.573 10:02:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:17:39.573 10:02:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:17:39.573 10:02:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:39.573 10:02:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:39.573 10:02:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:17:39.573 10:02:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:17:39.573 10:02:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:39.573 10:02:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:39.573 10:02:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:39.573 1+0 records in 00:17:39.573 
1+0 records out 00:17:39.573 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000472611 s, 8.7 MB/s 00:17:39.573 10:02:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:39.573 10:02:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:17:39.573 10:02:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:39.573 10:02:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:39.573 10:02:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:17:39.573 10:02:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:39.573 10:02:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:39.573 10:02:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:39.833 10:02:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:39.833 10:02:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:39.833 10:02:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:39.833 10:02:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:39.833 10:02:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:17:39.833 10:02:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:39.833 10:02:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:40.093 10:02:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:40.093 
10:02:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:40.093 10:02:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:40.093 10:02:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:40.093 10:02:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:40.093 10:02:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:40.093 10:02:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:40.093 10:02:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:40.093 10:02:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:40.093 10:02:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:40.353 10:02:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:40.353 10:02:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:40.353 10:02:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:40.353 10:02:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:40.353 10:02:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:40.353 10:02:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:40.353 10:02:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:40.353 10:02:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:40.353 10:02:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:40.353 10:02:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd 
bdev_passthru_delete spare 00:17:40.353 10:02:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.353 10:02:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.353 10:02:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.353 10:02:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:40.353 10:02:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.353 10:02:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.353 [2024-10-21 10:02:16.805768] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:40.353 [2024-10-21 10:02:16.805846] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:40.353 [2024-10-21 10:02:16.805874] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:17:40.353 [2024-10-21 10:02:16.805887] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:40.353 [2024-10-21 10:02:16.809062] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:40.353 [2024-10-21 10:02:16.809103] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:40.353 [2024-10-21 10:02:16.809222] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:40.353 [2024-10-21 10:02:16.809284] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:40.353 [2024-10-21 10:02:16.809497] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:40.353 [2024-10-21 10:02:16.809639] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:40.353 [2024-10-21 10:02:16.809733] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:40.353 spare 00:17:40.353 10:02:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.353 10:02:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:40.353 10:02:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.353 10:02:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.353 [2024-10-21 10:02:16.909696] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000005f00 00:17:40.353 [2024-10-21 10:02:16.909736] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:40.353 [2024-10-21 10:02:16.910146] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000048f60 00:17:40.353 [2024-10-21 10:02:16.921054] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000005f00 00:17:40.353 [2024-10-21 10:02:16.921083] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000005f00 00:17:40.353 [2024-10-21 10:02:16.921333] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:40.353 10:02:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.353 10:02:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:40.353 10:02:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:40.353 10:02:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:40.353 10:02:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:40.353 10:02:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 
00:17:40.353 10:02:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:40.354 10:02:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:40.354 10:02:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:40.354 10:02:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:40.354 10:02:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:40.354 10:02:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.354 10:02:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:40.354 10:02:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.354 10:02:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.614 10:02:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.614 10:02:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:40.614 "name": "raid_bdev1", 00:17:40.614 "uuid": "f2a50a58-dd1d-48c4-9223-821e4d32aa46", 00:17:40.614 "strip_size_kb": 64, 00:17:40.614 "state": "online", 00:17:40.614 "raid_level": "raid5f", 00:17:40.614 "superblock": true, 00:17:40.614 "num_base_bdevs": 4, 00:17:40.614 "num_base_bdevs_discovered": 4, 00:17:40.614 "num_base_bdevs_operational": 4, 00:17:40.614 "base_bdevs_list": [ 00:17:40.614 { 00:17:40.614 "name": "spare", 00:17:40.614 "uuid": "61b8df2d-1bcd-567b-a45e-12e962b3db5f", 00:17:40.614 "is_configured": true, 00:17:40.614 "data_offset": 2048, 00:17:40.614 "data_size": 63488 00:17:40.614 }, 00:17:40.614 { 00:17:40.614 "name": "BaseBdev2", 00:17:40.614 "uuid": "29ac600c-f126-5667-ad4f-72a40bd143a7", 00:17:40.614 "is_configured": true, 00:17:40.614 "data_offset": 
2048, 00:17:40.614 "data_size": 63488 00:17:40.614 }, 00:17:40.614 { 00:17:40.614 "name": "BaseBdev3", 00:17:40.614 "uuid": "1d06a79a-bd1e-52f0-a396-f61c0c4adc06", 00:17:40.614 "is_configured": true, 00:17:40.614 "data_offset": 2048, 00:17:40.614 "data_size": 63488 00:17:40.614 }, 00:17:40.614 { 00:17:40.614 "name": "BaseBdev4", 00:17:40.614 "uuid": "6ffb1d0b-c554-5f71-b6a9-78a5b435db7e", 00:17:40.614 "is_configured": true, 00:17:40.614 "data_offset": 2048, 00:17:40.614 "data_size": 63488 00:17:40.614 } 00:17:40.614 ] 00:17:40.614 }' 00:17:40.614 10:02:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:40.614 10:02:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.874 10:02:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:40.874 10:02:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:40.874 10:02:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:40.874 10:02:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:40.874 10:02:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:40.874 10:02:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.874 10:02:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:40.874 10:02:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.874 10:02:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.874 10:02:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.874 10:02:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:40.874 "name": 
"raid_bdev1", 00:17:40.874 "uuid": "f2a50a58-dd1d-48c4-9223-821e4d32aa46", 00:17:40.874 "strip_size_kb": 64, 00:17:40.874 "state": "online", 00:17:40.874 "raid_level": "raid5f", 00:17:40.874 "superblock": true, 00:17:40.874 "num_base_bdevs": 4, 00:17:40.874 "num_base_bdevs_discovered": 4, 00:17:40.874 "num_base_bdevs_operational": 4, 00:17:40.874 "base_bdevs_list": [ 00:17:40.874 { 00:17:40.874 "name": "spare", 00:17:40.874 "uuid": "61b8df2d-1bcd-567b-a45e-12e962b3db5f", 00:17:40.874 "is_configured": true, 00:17:40.874 "data_offset": 2048, 00:17:40.874 "data_size": 63488 00:17:40.874 }, 00:17:40.874 { 00:17:40.874 "name": "BaseBdev2", 00:17:40.874 "uuid": "29ac600c-f126-5667-ad4f-72a40bd143a7", 00:17:40.874 "is_configured": true, 00:17:40.874 "data_offset": 2048, 00:17:40.874 "data_size": 63488 00:17:40.874 }, 00:17:40.874 { 00:17:40.874 "name": "BaseBdev3", 00:17:40.874 "uuid": "1d06a79a-bd1e-52f0-a396-f61c0c4adc06", 00:17:40.874 "is_configured": true, 00:17:40.874 "data_offset": 2048, 00:17:40.874 "data_size": 63488 00:17:40.874 }, 00:17:40.874 { 00:17:40.874 "name": "BaseBdev4", 00:17:40.874 "uuid": "6ffb1d0b-c554-5f71-b6a9-78a5b435db7e", 00:17:40.874 "is_configured": true, 00:17:40.874 "data_offset": 2048, 00:17:40.874 "data_size": 63488 00:17:40.874 } 00:17:40.874 ] 00:17:40.874 }' 00:17:40.874 10:02:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:41.134 10:02:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:41.134 10:02:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:41.134 10:02:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:41.134 10:02:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.134 10:02:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.134 
10:02:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.134 10:02:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:41.134 10:02:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.134 10:02:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:41.134 10:02:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:41.134 10:02:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.134 10:02:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.135 [2024-10-21 10:02:17.604353] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:41.135 10:02:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.135 10:02:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:41.135 10:02:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:41.135 10:02:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:41.135 10:02:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:41.135 10:02:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:41.135 10:02:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:41.135 10:02:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:41.135 10:02:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:41.135 10:02:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:17:41.135 10:02:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:41.135 10:02:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.135 10:02:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:41.135 10:02:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.135 10:02:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.135 10:02:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.135 10:02:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:41.135 "name": "raid_bdev1", 00:17:41.135 "uuid": "f2a50a58-dd1d-48c4-9223-821e4d32aa46", 00:17:41.135 "strip_size_kb": 64, 00:17:41.135 "state": "online", 00:17:41.135 "raid_level": "raid5f", 00:17:41.135 "superblock": true, 00:17:41.135 "num_base_bdevs": 4, 00:17:41.135 "num_base_bdevs_discovered": 3, 00:17:41.135 "num_base_bdevs_operational": 3, 00:17:41.135 "base_bdevs_list": [ 00:17:41.135 { 00:17:41.135 "name": null, 00:17:41.135 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:41.135 "is_configured": false, 00:17:41.135 "data_offset": 0, 00:17:41.135 "data_size": 63488 00:17:41.135 }, 00:17:41.135 { 00:17:41.135 "name": "BaseBdev2", 00:17:41.135 "uuid": "29ac600c-f126-5667-ad4f-72a40bd143a7", 00:17:41.135 "is_configured": true, 00:17:41.135 "data_offset": 2048, 00:17:41.135 "data_size": 63488 00:17:41.135 }, 00:17:41.135 { 00:17:41.135 "name": "BaseBdev3", 00:17:41.135 "uuid": "1d06a79a-bd1e-52f0-a396-f61c0c4adc06", 00:17:41.135 "is_configured": true, 00:17:41.135 "data_offset": 2048, 00:17:41.135 "data_size": 63488 00:17:41.135 }, 00:17:41.135 { 00:17:41.135 "name": "BaseBdev4", 00:17:41.135 "uuid": "6ffb1d0b-c554-5f71-b6a9-78a5b435db7e", 00:17:41.135 "is_configured": true, 00:17:41.135 "data_offset": 
2048, 00:17:41.135 "data_size": 63488 00:17:41.135 } 00:17:41.135 ] 00:17:41.135 }' 00:17:41.135 10:02:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:41.135 10:02:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.705 10:02:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:41.705 10:02:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.705 10:02:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.705 [2024-10-21 10:02:18.083649] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:41.705 [2024-10-21 10:02:18.083952] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:41.705 [2024-10-21 10:02:18.083987] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:41.705 [2024-10-21 10:02:18.084040] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:41.705 [2024-10-21 10:02:18.105850] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049030 00:17:41.705 10:02:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.705 10:02:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:41.705 [2024-10-21 10:02:18.119515] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:42.643 10:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:42.643 10:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:42.643 10:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:42.643 10:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:42.643 10:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:42.643 10:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.643 10:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:42.643 10:02:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.643 10:02:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.643 10:02:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.643 10:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:42.643 "name": "raid_bdev1", 00:17:42.643 "uuid": "f2a50a58-dd1d-48c4-9223-821e4d32aa46", 00:17:42.643 "strip_size_kb": 64, 00:17:42.643 "state": "online", 00:17:42.643 
"raid_level": "raid5f", 00:17:42.643 "superblock": true, 00:17:42.643 "num_base_bdevs": 4, 00:17:42.643 "num_base_bdevs_discovered": 4, 00:17:42.643 "num_base_bdevs_operational": 4, 00:17:42.643 "process": { 00:17:42.643 "type": "rebuild", 00:17:42.643 "target": "spare", 00:17:42.643 "progress": { 00:17:42.643 "blocks": 17280, 00:17:42.643 "percent": 9 00:17:42.643 } 00:17:42.643 }, 00:17:42.643 "base_bdevs_list": [ 00:17:42.643 { 00:17:42.643 "name": "spare", 00:17:42.643 "uuid": "61b8df2d-1bcd-567b-a45e-12e962b3db5f", 00:17:42.643 "is_configured": true, 00:17:42.643 "data_offset": 2048, 00:17:42.643 "data_size": 63488 00:17:42.643 }, 00:17:42.643 { 00:17:42.643 "name": "BaseBdev2", 00:17:42.643 "uuid": "29ac600c-f126-5667-ad4f-72a40bd143a7", 00:17:42.643 "is_configured": true, 00:17:42.643 "data_offset": 2048, 00:17:42.643 "data_size": 63488 00:17:42.643 }, 00:17:42.643 { 00:17:42.643 "name": "BaseBdev3", 00:17:42.643 "uuid": "1d06a79a-bd1e-52f0-a396-f61c0c4adc06", 00:17:42.643 "is_configured": true, 00:17:42.643 "data_offset": 2048, 00:17:42.643 "data_size": 63488 00:17:42.643 }, 00:17:42.643 { 00:17:42.643 "name": "BaseBdev4", 00:17:42.643 "uuid": "6ffb1d0b-c554-5f71-b6a9-78a5b435db7e", 00:17:42.643 "is_configured": true, 00:17:42.643 "data_offset": 2048, 00:17:42.643 "data_size": 63488 00:17:42.643 } 00:17:42.643 ] 00:17:42.643 }' 00:17:42.643 10:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:42.643 10:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:42.643 10:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:42.903 10:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:42.903 10:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:42.903 10:02:19 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.903 10:02:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.903 [2024-10-21 10:02:19.271249] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:42.903 [2024-10-21 10:02:19.329798] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:42.903 [2024-10-21 10:02:19.329877] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:42.903 [2024-10-21 10:02:19.329900] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:42.903 [2024-10-21 10:02:19.329929] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:42.903 10:02:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.903 10:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:42.903 10:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:42.903 10:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:42.903 10:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:42.903 10:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:42.903 10:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:42.903 10:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:42.904 10:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:42.904 10:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:42.904 10:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:17:42.904 10:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.904 10:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:42.904 10:02:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.904 10:02:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.904 10:02:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.904 10:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:42.904 "name": "raid_bdev1", 00:17:42.904 "uuid": "f2a50a58-dd1d-48c4-9223-821e4d32aa46", 00:17:42.904 "strip_size_kb": 64, 00:17:42.904 "state": "online", 00:17:42.904 "raid_level": "raid5f", 00:17:42.904 "superblock": true, 00:17:42.904 "num_base_bdevs": 4, 00:17:42.904 "num_base_bdevs_discovered": 3, 00:17:42.904 "num_base_bdevs_operational": 3, 00:17:42.904 "base_bdevs_list": [ 00:17:42.904 { 00:17:42.904 "name": null, 00:17:42.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:42.904 "is_configured": false, 00:17:42.904 "data_offset": 0, 00:17:42.904 "data_size": 63488 00:17:42.904 }, 00:17:42.904 { 00:17:42.904 "name": "BaseBdev2", 00:17:42.904 "uuid": "29ac600c-f126-5667-ad4f-72a40bd143a7", 00:17:42.904 "is_configured": true, 00:17:42.904 "data_offset": 2048, 00:17:42.904 "data_size": 63488 00:17:42.904 }, 00:17:42.904 { 00:17:42.904 "name": "BaseBdev3", 00:17:42.904 "uuid": "1d06a79a-bd1e-52f0-a396-f61c0c4adc06", 00:17:42.904 "is_configured": true, 00:17:42.904 "data_offset": 2048, 00:17:42.904 "data_size": 63488 00:17:42.904 }, 00:17:42.904 { 00:17:42.904 "name": "BaseBdev4", 00:17:42.904 "uuid": "6ffb1d0b-c554-5f71-b6a9-78a5b435db7e", 00:17:42.904 "is_configured": true, 00:17:42.904 "data_offset": 2048, 00:17:42.904 "data_size": 63488 00:17:42.904 } 00:17:42.904 ] 00:17:42.904 }' 
00:17:42.904 10:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:42.904 10:02:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.473 10:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:43.473 10:02:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.473 10:02:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.473 [2024-10-21 10:02:19.860003] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:43.473 [2024-10-21 10:02:19.860168] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:43.473 [2024-10-21 10:02:19.860256] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:17:43.473 [2024-10-21 10:02:19.860305] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:43.473 [2024-10-21 10:02:19.861058] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:43.473 [2024-10-21 10:02:19.861141] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:43.473 [2024-10-21 10:02:19.861326] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:43.473 [2024-10-21 10:02:19.861387] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:43.473 [2024-10-21 10:02:19.861445] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:43.473 [2024-10-21 10:02:19.861543] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:43.473 [2024-10-21 10:02:19.883128] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049100 00:17:43.473 spare 00:17:43.473 10:02:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.473 10:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:43.473 [2024-10-21 10:02:19.896388] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:44.410 10:02:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:44.410 10:02:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:44.410 10:02:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:44.410 10:02:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:44.410 10:02:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:44.410 10:02:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.410 10:02:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:44.410 10:02:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.410 10:02:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.410 10:02:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.410 10:02:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:44.410 "name": "raid_bdev1", 00:17:44.410 "uuid": "f2a50a58-dd1d-48c4-9223-821e4d32aa46", 00:17:44.410 "strip_size_kb": 64, 00:17:44.410 "state": 
"online", 00:17:44.410 "raid_level": "raid5f", 00:17:44.410 "superblock": true, 00:17:44.410 "num_base_bdevs": 4, 00:17:44.410 "num_base_bdevs_discovered": 4, 00:17:44.410 "num_base_bdevs_operational": 4, 00:17:44.410 "process": { 00:17:44.410 "type": "rebuild", 00:17:44.410 "target": "spare", 00:17:44.410 "progress": { 00:17:44.410 "blocks": 19200, 00:17:44.410 "percent": 10 00:17:44.410 } 00:17:44.410 }, 00:17:44.410 "base_bdevs_list": [ 00:17:44.410 { 00:17:44.410 "name": "spare", 00:17:44.411 "uuid": "61b8df2d-1bcd-567b-a45e-12e962b3db5f", 00:17:44.411 "is_configured": true, 00:17:44.411 "data_offset": 2048, 00:17:44.411 "data_size": 63488 00:17:44.411 }, 00:17:44.411 { 00:17:44.411 "name": "BaseBdev2", 00:17:44.411 "uuid": "29ac600c-f126-5667-ad4f-72a40bd143a7", 00:17:44.411 "is_configured": true, 00:17:44.411 "data_offset": 2048, 00:17:44.411 "data_size": 63488 00:17:44.411 }, 00:17:44.411 { 00:17:44.411 "name": "BaseBdev3", 00:17:44.411 "uuid": "1d06a79a-bd1e-52f0-a396-f61c0c4adc06", 00:17:44.411 "is_configured": true, 00:17:44.411 "data_offset": 2048, 00:17:44.411 "data_size": 63488 00:17:44.411 }, 00:17:44.411 { 00:17:44.411 "name": "BaseBdev4", 00:17:44.411 "uuid": "6ffb1d0b-c554-5f71-b6a9-78a5b435db7e", 00:17:44.411 "is_configured": true, 00:17:44.411 "data_offset": 2048, 00:17:44.411 "data_size": 63488 00:17:44.411 } 00:17:44.411 ] 00:17:44.411 }' 00:17:44.411 10:02:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:44.411 10:02:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:44.411 10:02:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:44.671 10:02:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:44.671 10:02:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:44.671 10:02:21 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.671 10:02:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.671 [2024-10-21 10:02:21.051821] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:44.671 [2024-10-21 10:02:21.106610] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:44.671 [2024-10-21 10:02:21.106756] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:44.671 [2024-10-21 10:02:21.106824] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:44.671 [2024-10-21 10:02:21.106866] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:44.671 10:02:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.671 10:02:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:44.671 10:02:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:44.671 10:02:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:44.671 10:02:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:44.671 10:02:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:44.671 10:02:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:44.671 10:02:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:44.671 10:02:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:44.671 10:02:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:44.671 10:02:21 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:44.671 10:02:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.671 10:02:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.671 10:02:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:44.671 10:02:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.671 10:02:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.671 10:02:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:44.671 "name": "raid_bdev1", 00:17:44.671 "uuid": "f2a50a58-dd1d-48c4-9223-821e4d32aa46", 00:17:44.671 "strip_size_kb": 64, 00:17:44.671 "state": "online", 00:17:44.671 "raid_level": "raid5f", 00:17:44.671 "superblock": true, 00:17:44.671 "num_base_bdevs": 4, 00:17:44.671 "num_base_bdevs_discovered": 3, 00:17:44.671 "num_base_bdevs_operational": 3, 00:17:44.671 "base_bdevs_list": [ 00:17:44.671 { 00:17:44.671 "name": null, 00:17:44.671 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.671 "is_configured": false, 00:17:44.671 "data_offset": 0, 00:17:44.671 "data_size": 63488 00:17:44.671 }, 00:17:44.671 { 00:17:44.671 "name": "BaseBdev2", 00:17:44.671 "uuid": "29ac600c-f126-5667-ad4f-72a40bd143a7", 00:17:44.671 "is_configured": true, 00:17:44.671 "data_offset": 2048, 00:17:44.671 "data_size": 63488 00:17:44.671 }, 00:17:44.671 { 00:17:44.671 "name": "BaseBdev3", 00:17:44.671 "uuid": "1d06a79a-bd1e-52f0-a396-f61c0c4adc06", 00:17:44.671 "is_configured": true, 00:17:44.671 "data_offset": 2048, 00:17:44.671 "data_size": 63488 00:17:44.671 }, 00:17:44.671 { 00:17:44.671 "name": "BaseBdev4", 00:17:44.671 "uuid": "6ffb1d0b-c554-5f71-b6a9-78a5b435db7e", 00:17:44.671 "is_configured": true, 00:17:44.671 "data_offset": 2048, 00:17:44.671 
"data_size": 63488 00:17:44.671 } 00:17:44.671 ] 00:17:44.671 }' 00:17:44.671 10:02:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:44.671 10:02:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.240 10:02:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:45.240 10:02:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:45.240 10:02:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:45.240 10:02:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:45.240 10:02:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:45.240 10:02:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.240 10:02:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:45.240 10:02:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.240 10:02:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.240 10:02:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.240 10:02:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:45.240 "name": "raid_bdev1", 00:17:45.240 "uuid": "f2a50a58-dd1d-48c4-9223-821e4d32aa46", 00:17:45.240 "strip_size_kb": 64, 00:17:45.240 "state": "online", 00:17:45.240 "raid_level": "raid5f", 00:17:45.240 "superblock": true, 00:17:45.240 "num_base_bdevs": 4, 00:17:45.240 "num_base_bdevs_discovered": 3, 00:17:45.240 "num_base_bdevs_operational": 3, 00:17:45.240 "base_bdevs_list": [ 00:17:45.240 { 00:17:45.240 "name": null, 00:17:45.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.240 
"is_configured": false, 00:17:45.240 "data_offset": 0, 00:17:45.240 "data_size": 63488 00:17:45.240 }, 00:17:45.240 { 00:17:45.240 "name": "BaseBdev2", 00:17:45.240 "uuid": "29ac600c-f126-5667-ad4f-72a40bd143a7", 00:17:45.240 "is_configured": true, 00:17:45.240 "data_offset": 2048, 00:17:45.240 "data_size": 63488 00:17:45.240 }, 00:17:45.240 { 00:17:45.240 "name": "BaseBdev3", 00:17:45.240 "uuid": "1d06a79a-bd1e-52f0-a396-f61c0c4adc06", 00:17:45.240 "is_configured": true, 00:17:45.240 "data_offset": 2048, 00:17:45.240 "data_size": 63488 00:17:45.240 }, 00:17:45.240 { 00:17:45.240 "name": "BaseBdev4", 00:17:45.240 "uuid": "6ffb1d0b-c554-5f71-b6a9-78a5b435db7e", 00:17:45.240 "is_configured": true, 00:17:45.240 "data_offset": 2048, 00:17:45.240 "data_size": 63488 00:17:45.240 } 00:17:45.240 ] 00:17:45.240 }' 00:17:45.240 10:02:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:45.240 10:02:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:45.240 10:02:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:45.240 10:02:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:45.240 10:02:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:45.240 10:02:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.240 10:02:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.240 10:02:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.240 10:02:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:45.240 10:02:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.240 10:02:21 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.240 [2024-10-21 10:02:21.784719] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:45.240 [2024-10-21 10:02:21.784791] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:45.240 [2024-10-21 10:02:21.784823] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:17:45.240 [2024-10-21 10:02:21.784836] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:45.240 [2024-10-21 10:02:21.785507] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:45.240 [2024-10-21 10:02:21.785538] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:45.240 [2024-10-21 10:02:21.785671] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:45.240 [2024-10-21 10:02:21.785692] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:45.240 [2024-10-21 10:02:21.785706] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:45.240 [2024-10-21 10:02:21.785720] bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:45.240 BaseBdev1 00:17:45.240 10:02:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.240 10:02:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:46.621 10:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:46.621 10:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:46.621 10:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:17:46.621 10:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:46.621 10:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:46.621 10:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:46.621 10:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:46.621 10:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:46.621 10:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:46.621 10:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:46.621 10:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.621 10:02:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.621 10:02:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.621 10:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:46.621 10:02:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.621 10:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:46.621 "name": "raid_bdev1", 00:17:46.621 "uuid": "f2a50a58-dd1d-48c4-9223-821e4d32aa46", 00:17:46.621 "strip_size_kb": 64, 00:17:46.621 "state": "online", 00:17:46.621 "raid_level": "raid5f", 00:17:46.621 "superblock": true, 00:17:46.621 "num_base_bdevs": 4, 00:17:46.621 "num_base_bdevs_discovered": 3, 00:17:46.621 "num_base_bdevs_operational": 3, 00:17:46.621 "base_bdevs_list": [ 00:17:46.621 { 00:17:46.621 "name": null, 00:17:46.621 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.621 "is_configured": false, 00:17:46.621 
"data_offset": 0, 00:17:46.621 "data_size": 63488 00:17:46.621 }, 00:17:46.621 { 00:17:46.621 "name": "BaseBdev2", 00:17:46.621 "uuid": "29ac600c-f126-5667-ad4f-72a40bd143a7", 00:17:46.621 "is_configured": true, 00:17:46.621 "data_offset": 2048, 00:17:46.621 "data_size": 63488 00:17:46.621 }, 00:17:46.621 { 00:17:46.621 "name": "BaseBdev3", 00:17:46.621 "uuid": "1d06a79a-bd1e-52f0-a396-f61c0c4adc06", 00:17:46.621 "is_configured": true, 00:17:46.621 "data_offset": 2048, 00:17:46.621 "data_size": 63488 00:17:46.621 }, 00:17:46.621 { 00:17:46.621 "name": "BaseBdev4", 00:17:46.621 "uuid": "6ffb1d0b-c554-5f71-b6a9-78a5b435db7e", 00:17:46.621 "is_configured": true, 00:17:46.621 "data_offset": 2048, 00:17:46.621 "data_size": 63488 00:17:46.621 } 00:17:46.621 ] 00:17:46.621 }' 00:17:46.621 10:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:46.621 10:02:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.881 10:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:46.881 10:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:46.881 10:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:46.881 10:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:46.881 10:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:46.881 10:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.881 10:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:46.881 10:02:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.881 10:02:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:17:46.882 10:02:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.882 10:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:46.882 "name": "raid_bdev1", 00:17:46.882 "uuid": "f2a50a58-dd1d-48c4-9223-821e4d32aa46", 00:17:46.882 "strip_size_kb": 64, 00:17:46.882 "state": "online", 00:17:46.882 "raid_level": "raid5f", 00:17:46.882 "superblock": true, 00:17:46.882 "num_base_bdevs": 4, 00:17:46.882 "num_base_bdevs_discovered": 3, 00:17:46.882 "num_base_bdevs_operational": 3, 00:17:46.882 "base_bdevs_list": [ 00:17:46.882 { 00:17:46.882 "name": null, 00:17:46.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.882 "is_configured": false, 00:17:46.882 "data_offset": 0, 00:17:46.882 "data_size": 63488 00:17:46.882 }, 00:17:46.882 { 00:17:46.882 "name": "BaseBdev2", 00:17:46.882 "uuid": "29ac600c-f126-5667-ad4f-72a40bd143a7", 00:17:46.882 "is_configured": true, 00:17:46.882 "data_offset": 2048, 00:17:46.882 "data_size": 63488 00:17:46.882 }, 00:17:46.882 { 00:17:46.882 "name": "BaseBdev3", 00:17:46.882 "uuid": "1d06a79a-bd1e-52f0-a396-f61c0c4adc06", 00:17:46.882 "is_configured": true, 00:17:46.882 "data_offset": 2048, 00:17:46.882 "data_size": 63488 00:17:46.882 }, 00:17:46.882 { 00:17:46.882 "name": "BaseBdev4", 00:17:46.882 "uuid": "6ffb1d0b-c554-5f71-b6a9-78a5b435db7e", 00:17:46.882 "is_configured": true, 00:17:46.882 "data_offset": 2048, 00:17:46.882 "data_size": 63488 00:17:46.882 } 00:17:46.882 ] 00:17:46.882 }' 00:17:46.882 10:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:46.882 10:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:46.882 10:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:46.882 10:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:46.882 
10:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:46.882 10:02:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:17:46.882 10:02:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:46.882 10:02:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:46.882 10:02:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:46.882 10:02:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:46.882 10:02:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:46.882 10:02:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:46.882 10:02:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.882 10:02:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.882 [2024-10-21 10:02:23.466402] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:46.882 [2024-10-21 10:02:23.466718] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:46.882 [2024-10-21 10:02:23.466746] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:46.882 request: 00:17:46.882 { 00:17:46.882 "base_bdev": "BaseBdev1", 00:17:46.882 "raid_bdev": "raid_bdev1", 00:17:46.882 "method": "bdev_raid_add_base_bdev", 00:17:46.882 "req_id": 1 00:17:46.882 } 00:17:46.882 Got JSON-RPC error response 00:17:46.882 response: 00:17:46.882 { 00:17:46.882 "code": -22, 00:17:46.882 "message": 
"Failed to add base bdev to RAID bdev: Invalid argument" 00:17:46.882 } 00:17:46.882 10:02:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:46.882 10:02:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:17:46.882 10:02:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:46.882 10:02:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:46.882 10:02:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:46.882 10:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:48.261 10:02:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:48.262 10:02:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:48.262 10:02:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:48.262 10:02:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:48.262 10:02:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:48.262 10:02:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:48.262 10:02:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:48.262 10:02:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:48.262 10:02:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:48.262 10:02:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:48.262 10:02:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.262 10:02:24 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:48.262 10:02:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.262 10:02:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.262 10:02:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.262 10:02:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:48.262 "name": "raid_bdev1", 00:17:48.262 "uuid": "f2a50a58-dd1d-48c4-9223-821e4d32aa46", 00:17:48.262 "strip_size_kb": 64, 00:17:48.262 "state": "online", 00:17:48.262 "raid_level": "raid5f", 00:17:48.262 "superblock": true, 00:17:48.262 "num_base_bdevs": 4, 00:17:48.262 "num_base_bdevs_discovered": 3, 00:17:48.262 "num_base_bdevs_operational": 3, 00:17:48.262 "base_bdevs_list": [ 00:17:48.262 { 00:17:48.262 "name": null, 00:17:48.262 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:48.262 "is_configured": false, 00:17:48.262 "data_offset": 0, 00:17:48.262 "data_size": 63488 00:17:48.262 }, 00:17:48.262 { 00:17:48.262 "name": "BaseBdev2", 00:17:48.262 "uuid": "29ac600c-f126-5667-ad4f-72a40bd143a7", 00:17:48.262 "is_configured": true, 00:17:48.262 "data_offset": 2048, 00:17:48.262 "data_size": 63488 00:17:48.262 }, 00:17:48.262 { 00:17:48.262 "name": "BaseBdev3", 00:17:48.262 "uuid": "1d06a79a-bd1e-52f0-a396-f61c0c4adc06", 00:17:48.262 "is_configured": true, 00:17:48.262 "data_offset": 2048, 00:17:48.262 "data_size": 63488 00:17:48.262 }, 00:17:48.262 { 00:17:48.262 "name": "BaseBdev4", 00:17:48.262 "uuid": "6ffb1d0b-c554-5f71-b6a9-78a5b435db7e", 00:17:48.262 "is_configured": true, 00:17:48.262 "data_offset": 2048, 00:17:48.262 "data_size": 63488 00:17:48.262 } 00:17:48.262 ] 00:17:48.262 }' 00:17:48.262 10:02:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:48.262 10:02:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:17:48.521 10:02:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:48.521 10:02:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:48.521 10:02:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:48.521 10:02:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:48.521 10:02:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:48.521 10:02:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:48.521 10:02:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.521 10:02:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.521 10:02:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.521 10:02:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.521 10:02:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:48.521 "name": "raid_bdev1", 00:17:48.521 "uuid": "f2a50a58-dd1d-48c4-9223-821e4d32aa46", 00:17:48.521 "strip_size_kb": 64, 00:17:48.521 "state": "online", 00:17:48.521 "raid_level": "raid5f", 00:17:48.521 "superblock": true, 00:17:48.521 "num_base_bdevs": 4, 00:17:48.521 "num_base_bdevs_discovered": 3, 00:17:48.521 "num_base_bdevs_operational": 3, 00:17:48.521 "base_bdevs_list": [ 00:17:48.521 { 00:17:48.521 "name": null, 00:17:48.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:48.521 "is_configured": false, 00:17:48.521 "data_offset": 0, 00:17:48.521 "data_size": 63488 00:17:48.521 }, 00:17:48.521 { 00:17:48.521 "name": "BaseBdev2", 00:17:48.521 "uuid": "29ac600c-f126-5667-ad4f-72a40bd143a7", 00:17:48.521 "is_configured": true, 
00:17:48.521 "data_offset": 2048, 00:17:48.521 "data_size": 63488 00:17:48.521 }, 00:17:48.521 { 00:17:48.521 "name": "BaseBdev3", 00:17:48.521 "uuid": "1d06a79a-bd1e-52f0-a396-f61c0c4adc06", 00:17:48.521 "is_configured": true, 00:17:48.521 "data_offset": 2048, 00:17:48.521 "data_size": 63488 00:17:48.521 }, 00:17:48.521 { 00:17:48.521 "name": "BaseBdev4", 00:17:48.521 "uuid": "6ffb1d0b-c554-5f71-b6a9-78a5b435db7e", 00:17:48.521 "is_configured": true, 00:17:48.521 "data_offset": 2048, 00:17:48.521 "data_size": 63488 00:17:48.521 } 00:17:48.521 ] 00:17:48.521 }' 00:17:48.521 10:02:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:48.521 10:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:48.521 10:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:48.521 10:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:48.521 10:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 84781 00:17:48.521 10:02:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 84781 ']' 00:17:48.521 10:02:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 84781 00:17:48.521 10:02:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:17:48.521 10:02:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:48.521 10:02:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84781 00:17:48.780 10:02:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:48.781 killing process with pid 84781 00:17:48.781 Received shutdown signal, test time was about 60.000000 seconds 00:17:48.781 00:17:48.781 Latency(us) 00:17:48.781 [2024-10-21T10:02:25.376Z] Device 
Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:48.781 [2024-10-21T10:02:25.376Z] =================================================================================================================== 00:17:48.781 [2024-10-21T10:02:25.376Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:48.781 10:02:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:48.781 10:02:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84781' 00:17:48.781 10:02:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 84781 00:17:48.781 [2024-10-21 10:02:25.120604] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:48.781 [2024-10-21 10:02:25.120778] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:48.781 [2024-10-21 10:02:25.120882] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:48.781 [2024-10-21 10:02:25.120899] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000005f00 name raid_bdev1, state offline 00:17:48.781 10:02:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 84781 00:17:49.348 [2024-10-21 10:02:25.780025] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:50.728 10:02:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:17:50.728 00:17:50.728 real 0m28.668s 00:17:50.728 user 0m35.791s 00:17:50.728 sys 0m3.576s 00:17:50.728 10:02:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:50.728 10:02:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.728 ************************************ 00:17:50.728 END TEST raid5f_rebuild_test_sb 00:17:50.728 ************************************ 00:17:50.988 10:02:27 bdev_raid -- 
bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:17:50.988 10:02:27 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:17:50.988 10:02:27 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:17:50.988 10:02:27 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:50.988 10:02:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:50.988 ************************************ 00:17:50.988 START TEST raid_state_function_test_sb_4k 00:17:50.988 ************************************ 00:17:50.988 10:02:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:17:50.988 10:02:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:17:50.988 10:02:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:17:50.988 10:02:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:50.988 10:02:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:50.988 10:02:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:50.988 10:02:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:50.988 10:02:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:50.988 10:02:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:50.988 10:02:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:50.988 10:02:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:50.988 10:02:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:50.988 10:02:27 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:50.988 10:02:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:50.988 10:02:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:50.988 10:02:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:50.988 10:02:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:50.988 10:02:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:50.988 10:02:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:50.988 10:02:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:17:50.988 10:02:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:17:50.988 10:02:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:50.988 10:02:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:17:50.988 10:02:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=85602 00:17:50.988 10:02:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:50.988 10:02:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 85602' 00:17:50.988 Process raid pid: 85602 00:17:50.988 10:02:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 85602 00:17:50.988 10:02:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@831 -- # '[' -z 85602 ']' 00:17:50.988 Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock... 00:17:50.988 10:02:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:50.988 10:02:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:50.988 10:02:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:50.988 10:02:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:50.988 10:02:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:50.988 [2024-10-21 10:02:27.464092] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:17:50.988 [2024-10-21 10:02:27.464253] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:51.249 [2024-10-21 10:02:27.639303] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:51.249 [2024-10-21 10:02:27.805052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:51.819 [2024-10-21 10:02:28.115159] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:51.819 [2024-10-21 10:02:28.115223] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:51.819 10:02:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:51.819 10:02:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # return 0 00:17:51.819 10:02:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:51.819 10:02:28 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.819 10:02:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:51.819 [2024-10-21 10:02:28.380957] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:51.819 [2024-10-21 10:02:28.381024] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:51.819 [2024-10-21 10:02:28.381037] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:51.819 [2024-10-21 10:02:28.381050] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:51.819 10:02:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.819 10:02:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:51.819 10:02:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:51.819 10:02:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:51.819 10:02:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:51.819 10:02:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:51.819 10:02:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:51.819 10:02:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:51.819 10:02:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:51.819 10:02:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:51.819 10:02:28 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:51.819 10:02:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:51.819 10:02:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.819 10:02:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.819 10:02:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:51.819 10:02:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.078 10:02:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:52.078 "name": "Existed_Raid", 00:17:52.078 "uuid": "460d9877-d106-4222-8923-eddc5c020774", 00:17:52.078 "strip_size_kb": 0, 00:17:52.078 "state": "configuring", 00:17:52.078 "raid_level": "raid1", 00:17:52.078 "superblock": true, 00:17:52.078 "num_base_bdevs": 2, 00:17:52.078 "num_base_bdevs_discovered": 0, 00:17:52.078 "num_base_bdevs_operational": 2, 00:17:52.078 "base_bdevs_list": [ 00:17:52.078 { 00:17:52.078 "name": "BaseBdev1", 00:17:52.078 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.078 "is_configured": false, 00:17:52.078 "data_offset": 0, 00:17:52.078 "data_size": 0 00:17:52.078 }, 00:17:52.078 { 00:17:52.078 "name": "BaseBdev2", 00:17:52.078 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.078 "is_configured": false, 00:17:52.078 "data_offset": 0, 00:17:52.078 "data_size": 0 00:17:52.078 } 00:17:52.078 ] 00:17:52.078 }' 00:17:52.078 10:02:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:52.078 10:02:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:52.339 10:02:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:17:52.339 10:02:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.339 10:02:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:52.339 [2024-10-21 10:02:28.872096] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:52.339 [2024-10-21 10:02:28.872209] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000005b80 name Existed_Raid, state configuring 00:17:52.339 10:02:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.339 10:02:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:52.339 10:02:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.339 10:02:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:52.339 [2024-10-21 10:02:28.884107] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:52.339 [2024-10-21 10:02:28.884159] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:52.339 [2024-10-21 10:02:28.884170] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:52.339 [2024-10-21 10:02:28.884185] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:52.339 10:02:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.339 10:02:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:17:52.339 10:02:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.339 10:02:28 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:17:52.599 [2024-10-21 10:02:28.950682] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:52.599 BaseBdev1 00:17:52.599 10:02:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.599 10:02:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:52.599 10:02:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:17:52.599 10:02:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:52.599 10:02:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local i 00:17:52.599 10:02:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:52.599 10:02:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:52.599 10:02:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:52.599 10:02:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.599 10:02:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:52.599 10:02:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.599 10:02:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:52.599 10:02:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.599 10:02:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:52.599 [ 00:17:52.599 { 00:17:52.599 "name": "BaseBdev1", 00:17:52.599 "aliases": [ 00:17:52.599 "01943133-1233-4ba9-83eb-8823be4eb033" 00:17:52.599 
], 00:17:52.599 "product_name": "Malloc disk", 00:17:52.599 "block_size": 4096, 00:17:52.599 "num_blocks": 8192, 00:17:52.599 "uuid": "01943133-1233-4ba9-83eb-8823be4eb033", 00:17:52.599 "assigned_rate_limits": { 00:17:52.599 "rw_ios_per_sec": 0, 00:17:52.599 "rw_mbytes_per_sec": 0, 00:17:52.599 "r_mbytes_per_sec": 0, 00:17:52.599 "w_mbytes_per_sec": 0 00:17:52.599 }, 00:17:52.599 "claimed": true, 00:17:52.599 "claim_type": "exclusive_write", 00:17:52.599 "zoned": false, 00:17:52.599 "supported_io_types": { 00:17:52.599 "read": true, 00:17:52.599 "write": true, 00:17:52.599 "unmap": true, 00:17:52.599 "flush": true, 00:17:52.599 "reset": true, 00:17:52.599 "nvme_admin": false, 00:17:52.599 "nvme_io": false, 00:17:52.599 "nvme_io_md": false, 00:17:52.599 "write_zeroes": true, 00:17:52.599 "zcopy": true, 00:17:52.599 "get_zone_info": false, 00:17:52.599 "zone_management": false, 00:17:52.599 "zone_append": false, 00:17:52.599 "compare": false, 00:17:52.599 "compare_and_write": false, 00:17:52.599 "abort": true, 00:17:52.599 "seek_hole": false, 00:17:52.599 "seek_data": false, 00:17:52.599 "copy": true, 00:17:52.599 "nvme_iov_md": false 00:17:52.599 }, 00:17:52.599 "memory_domains": [ 00:17:52.599 { 00:17:52.599 "dma_device_id": "system", 00:17:52.599 "dma_device_type": 1 00:17:52.599 }, 00:17:52.599 { 00:17:52.599 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:52.599 "dma_device_type": 2 00:17:52.599 } 00:17:52.599 ], 00:17:52.599 "driver_specific": {} 00:17:52.599 } 00:17:52.599 ] 00:17:52.599 10:02:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.599 10:02:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@907 -- # return 0 00:17:52.599 10:02:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:52.599 10:02:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:17:52.599 10:02:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:52.599 10:02:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:52.599 10:02:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:52.599 10:02:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:52.599 10:02:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:52.599 10:02:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:52.599 10:02:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:52.599 10:02:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:52.599 10:02:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.599 10:02:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:52.599 10:02:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.599 10:02:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:52.599 10:02:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.599 10:02:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:52.599 "name": "Existed_Raid", 00:17:52.599 "uuid": "5d1ea943-83b1-43b0-891d-142bacb2e817", 00:17:52.599 "strip_size_kb": 0, 00:17:52.599 "state": "configuring", 00:17:52.599 "raid_level": "raid1", 00:17:52.599 "superblock": true, 00:17:52.599 "num_base_bdevs": 2, 00:17:52.599 "num_base_bdevs_discovered": 1, 
00:17:52.599 "num_base_bdevs_operational": 2, 00:17:52.599 "base_bdevs_list": [ 00:17:52.599 { 00:17:52.599 "name": "BaseBdev1", 00:17:52.599 "uuid": "01943133-1233-4ba9-83eb-8823be4eb033", 00:17:52.599 "is_configured": true, 00:17:52.599 "data_offset": 256, 00:17:52.599 "data_size": 7936 00:17:52.599 }, 00:17:52.599 { 00:17:52.599 "name": "BaseBdev2", 00:17:52.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.599 "is_configured": false, 00:17:52.599 "data_offset": 0, 00:17:52.599 "data_size": 0 00:17:52.599 } 00:17:52.599 ] 00:17:52.599 }' 00:17:52.599 10:02:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:52.599 10:02:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:52.859 10:02:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:52.859 10:02:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.859 10:02:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:52.859 [2024-10-21 10:02:29.449916] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:52.859 [2024-10-21 10:02:29.450067] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000005f00 name Existed_Raid, state configuring 00:17:53.118 10:02:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.118 10:02:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:53.118 10:02:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.118 10:02:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:53.118 [2024-10-21 10:02:29.457946] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:53.118 [2024-10-21 10:02:29.460438] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:53.118 [2024-10-21 10:02:29.460492] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:53.118 10:02:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.118 10:02:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:53.118 10:02:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:53.118 10:02:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:53.118 10:02:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:53.118 10:02:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:53.118 10:02:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:53.118 10:02:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:53.118 10:02:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:53.118 10:02:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:53.118 10:02:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:53.118 10:02:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:53.118 10:02:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:53.119 10:02:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:17:53.119 10:02:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.119 10:02:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.119 10:02:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:53.119 10:02:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.119 10:02:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:53.119 "name": "Existed_Raid", 00:17:53.119 "uuid": "a12f087f-b7e0-4709-8bf0-8d19bbd2ffb0", 00:17:53.119 "strip_size_kb": 0, 00:17:53.119 "state": "configuring", 00:17:53.119 "raid_level": "raid1", 00:17:53.119 "superblock": true, 00:17:53.119 "num_base_bdevs": 2, 00:17:53.119 "num_base_bdevs_discovered": 1, 00:17:53.119 "num_base_bdevs_operational": 2, 00:17:53.119 "base_bdevs_list": [ 00:17:53.119 { 00:17:53.119 "name": "BaseBdev1", 00:17:53.119 "uuid": "01943133-1233-4ba9-83eb-8823be4eb033", 00:17:53.119 "is_configured": true, 00:17:53.119 "data_offset": 256, 00:17:53.119 "data_size": 7936 00:17:53.119 }, 00:17:53.119 { 00:17:53.119 "name": "BaseBdev2", 00:17:53.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:53.119 "is_configured": false, 00:17:53.119 "data_offset": 0, 00:17:53.119 "data_size": 0 00:17:53.119 } 00:17:53.119 ] 00:17:53.119 }' 00:17:53.119 10:02:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:53.119 10:02:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:53.379 10:02:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:17:53.379 10:02:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.379 10:02:29 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:53.639 [2024-10-21 10:02:30.009125] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:53.639 [2024-10-21 10:02:30.009432] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:17:53.639 [2024-10-21 10:02:30.009450] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:53.639 [2024-10-21 10:02:30.009811] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:17:53.639 BaseBdev2 00:17:53.639 [2024-10-21 10:02:30.010014] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:17:53.639 [2024-10-21 10:02:30.010036] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006280 00:17:53.639 [2024-10-21 10:02:30.010221] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:53.639 10:02:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.639 10:02:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:53.639 10:02:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:17:53.639 10:02:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:53.639 10:02:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local i 00:17:53.639 10:02:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:53.640 10:02:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:53.640 10:02:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:53.640 10:02:30 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.640 10:02:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:53.640 10:02:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.640 10:02:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:53.640 10:02:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.640 10:02:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:53.640 [ 00:17:53.640 { 00:17:53.640 "name": "BaseBdev2", 00:17:53.640 "aliases": [ 00:17:53.640 "499527c3-8ad8-43b1-bd4e-9122baab2946" 00:17:53.640 ], 00:17:53.640 "product_name": "Malloc disk", 00:17:53.640 "block_size": 4096, 00:17:53.640 "num_blocks": 8192, 00:17:53.640 "uuid": "499527c3-8ad8-43b1-bd4e-9122baab2946", 00:17:53.640 "assigned_rate_limits": { 00:17:53.640 "rw_ios_per_sec": 0, 00:17:53.640 "rw_mbytes_per_sec": 0, 00:17:53.640 "r_mbytes_per_sec": 0, 00:17:53.640 "w_mbytes_per_sec": 0 00:17:53.640 }, 00:17:53.640 "claimed": true, 00:17:53.640 "claim_type": "exclusive_write", 00:17:53.640 "zoned": false, 00:17:53.640 "supported_io_types": { 00:17:53.640 "read": true, 00:17:53.640 "write": true, 00:17:53.640 "unmap": true, 00:17:53.640 "flush": true, 00:17:53.640 "reset": true, 00:17:53.640 "nvme_admin": false, 00:17:53.640 "nvme_io": false, 00:17:53.640 "nvme_io_md": false, 00:17:53.640 "write_zeroes": true, 00:17:53.640 "zcopy": true, 00:17:53.640 "get_zone_info": false, 00:17:53.640 "zone_management": false, 00:17:53.640 "zone_append": false, 00:17:53.640 "compare": false, 00:17:53.640 "compare_and_write": false, 00:17:53.640 "abort": true, 00:17:53.640 "seek_hole": false, 00:17:53.640 "seek_data": false, 00:17:53.640 "copy": true, 00:17:53.640 "nvme_iov_md": false 
00:17:53.640 }, 00:17:53.640 "memory_domains": [ 00:17:53.640 { 00:17:53.640 "dma_device_id": "system", 00:17:53.640 "dma_device_type": 1 00:17:53.640 }, 00:17:53.640 { 00:17:53.640 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:53.640 "dma_device_type": 2 00:17:53.640 } 00:17:53.640 ], 00:17:53.640 "driver_specific": {} 00:17:53.640 } 00:17:53.640 ] 00:17:53.640 10:02:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.640 10:02:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@907 -- # return 0 00:17:53.640 10:02:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:53.640 10:02:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:53.640 10:02:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:53.640 10:02:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:53.640 10:02:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:53.640 10:02:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:53.640 10:02:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:53.640 10:02:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:53.640 10:02:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:53.640 10:02:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:53.640 10:02:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:53.640 10:02:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:17:53.640 10:02:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.640 10:02:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:53.640 10:02:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.640 10:02:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:53.640 10:02:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.640 10:02:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:53.640 "name": "Existed_Raid", 00:17:53.640 "uuid": "a12f087f-b7e0-4709-8bf0-8d19bbd2ffb0", 00:17:53.640 "strip_size_kb": 0, 00:17:53.640 "state": "online", 00:17:53.640 "raid_level": "raid1", 00:17:53.640 "superblock": true, 00:17:53.640 "num_base_bdevs": 2, 00:17:53.640 "num_base_bdevs_discovered": 2, 00:17:53.640 "num_base_bdevs_operational": 2, 00:17:53.640 "base_bdevs_list": [ 00:17:53.640 { 00:17:53.640 "name": "BaseBdev1", 00:17:53.640 "uuid": "01943133-1233-4ba9-83eb-8823be4eb033", 00:17:53.640 "is_configured": true, 00:17:53.640 "data_offset": 256, 00:17:53.640 "data_size": 7936 00:17:53.640 }, 00:17:53.640 { 00:17:53.640 "name": "BaseBdev2", 00:17:53.640 "uuid": "499527c3-8ad8-43b1-bd4e-9122baab2946", 00:17:53.640 "is_configured": true, 00:17:53.640 "data_offset": 256, 00:17:53.640 "data_size": 7936 00:17:53.640 } 00:17:53.640 ] 00:17:53.640 }' 00:17:53.640 10:02:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:53.640 10:02:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:53.899 10:02:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:53.899 10:02:30 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:53.899 10:02:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:53.899 10:02:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:53.899 10:02:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:17:53.899 10:02:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:54.160 10:02:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:54.160 10:02:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:54.160 10:02:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.160 10:02:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:54.160 [2024-10-21 10:02:30.504755] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:54.160 10:02:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.160 10:02:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:54.160 "name": "Existed_Raid", 00:17:54.160 "aliases": [ 00:17:54.160 "a12f087f-b7e0-4709-8bf0-8d19bbd2ffb0" 00:17:54.160 ], 00:17:54.160 "product_name": "Raid Volume", 00:17:54.160 "block_size": 4096, 00:17:54.160 "num_blocks": 7936, 00:17:54.160 "uuid": "a12f087f-b7e0-4709-8bf0-8d19bbd2ffb0", 00:17:54.160 "assigned_rate_limits": { 00:17:54.160 "rw_ios_per_sec": 0, 00:17:54.160 "rw_mbytes_per_sec": 0, 00:17:54.160 "r_mbytes_per_sec": 0, 00:17:54.160 "w_mbytes_per_sec": 0 00:17:54.160 }, 00:17:54.160 "claimed": false, 00:17:54.160 "zoned": false, 00:17:54.160 "supported_io_types": { 00:17:54.160 "read": true, 
00:17:54.160 "write": true, 00:17:54.160 "unmap": false, 00:17:54.160 "flush": false, 00:17:54.160 "reset": true, 00:17:54.160 "nvme_admin": false, 00:17:54.160 "nvme_io": false, 00:17:54.160 "nvme_io_md": false, 00:17:54.160 "write_zeroes": true, 00:17:54.160 "zcopy": false, 00:17:54.160 "get_zone_info": false, 00:17:54.160 "zone_management": false, 00:17:54.160 "zone_append": false, 00:17:54.160 "compare": false, 00:17:54.160 "compare_and_write": false, 00:17:54.160 "abort": false, 00:17:54.160 "seek_hole": false, 00:17:54.160 "seek_data": false, 00:17:54.160 "copy": false, 00:17:54.160 "nvme_iov_md": false 00:17:54.160 }, 00:17:54.160 "memory_domains": [ 00:17:54.160 { 00:17:54.160 "dma_device_id": "system", 00:17:54.160 "dma_device_type": 1 00:17:54.160 }, 00:17:54.160 { 00:17:54.160 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:54.160 "dma_device_type": 2 00:17:54.160 }, 00:17:54.160 { 00:17:54.160 "dma_device_id": "system", 00:17:54.160 "dma_device_type": 1 00:17:54.160 }, 00:17:54.160 { 00:17:54.160 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:54.160 "dma_device_type": 2 00:17:54.160 } 00:17:54.160 ], 00:17:54.160 "driver_specific": { 00:17:54.160 "raid": { 00:17:54.160 "uuid": "a12f087f-b7e0-4709-8bf0-8d19bbd2ffb0", 00:17:54.160 "strip_size_kb": 0, 00:17:54.160 "state": "online", 00:17:54.160 "raid_level": "raid1", 00:17:54.160 "superblock": true, 00:17:54.160 "num_base_bdevs": 2, 00:17:54.160 "num_base_bdevs_discovered": 2, 00:17:54.160 "num_base_bdevs_operational": 2, 00:17:54.160 "base_bdevs_list": [ 00:17:54.160 { 00:17:54.160 "name": "BaseBdev1", 00:17:54.160 "uuid": "01943133-1233-4ba9-83eb-8823be4eb033", 00:17:54.160 "is_configured": true, 00:17:54.160 "data_offset": 256, 00:17:54.160 "data_size": 7936 00:17:54.160 }, 00:17:54.160 { 00:17:54.160 "name": "BaseBdev2", 00:17:54.160 "uuid": "499527c3-8ad8-43b1-bd4e-9122baab2946", 00:17:54.160 "is_configured": true, 00:17:54.160 "data_offset": 256, 00:17:54.160 "data_size": 7936 00:17:54.160 } 
00:17:54.160 ] 00:17:54.160 } 00:17:54.160 } 00:17:54.160 }' 00:17:54.160 10:02:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:54.160 10:02:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:54.160 BaseBdev2' 00:17:54.160 10:02:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:54.160 10:02:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:17:54.160 10:02:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:54.160 10:02:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:54.161 10:02:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:54.161 10:02:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.161 10:02:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:54.161 10:02:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.161 10:02:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:54.161 10:02:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:54.161 10:02:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:54.161 10:02:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:54.161 10:02:30 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.161 10:02:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:54.161 10:02:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:54.161 10:02:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.161 10:02:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:54.161 10:02:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:54.161 10:02:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:54.161 10:02:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.161 10:02:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:54.161 [2024-10-21 10:02:30.740021] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:54.421 10:02:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.421 10:02:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:54.421 10:02:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:17:54.421 10:02:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:54.421 10:02:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:17:54.421 10:02:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:54.421 10:02:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:17:54.421 10:02:30 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:54.421 10:02:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:54.421 10:02:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:54.422 10:02:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:54.422 10:02:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:54.422 10:02:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:54.422 10:02:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:54.422 10:02:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:54.422 10:02:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:54.422 10:02:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.422 10:02:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:54.422 10:02:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.422 10:02:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:54.422 10:02:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.422 10:02:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:54.422 "name": "Existed_Raid", 00:17:54.422 "uuid": "a12f087f-b7e0-4709-8bf0-8d19bbd2ffb0", 00:17:54.422 "strip_size_kb": 0, 00:17:54.422 "state": "online", 00:17:54.422 "raid_level": "raid1", 00:17:54.422 "superblock": true, 00:17:54.422 
"num_base_bdevs": 2, 00:17:54.422 "num_base_bdevs_discovered": 1, 00:17:54.422 "num_base_bdevs_operational": 1, 00:17:54.422 "base_bdevs_list": [ 00:17:54.422 { 00:17:54.422 "name": null, 00:17:54.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:54.422 "is_configured": false, 00:17:54.422 "data_offset": 0, 00:17:54.422 "data_size": 7936 00:17:54.422 }, 00:17:54.422 { 00:17:54.422 "name": "BaseBdev2", 00:17:54.422 "uuid": "499527c3-8ad8-43b1-bd4e-9122baab2946", 00:17:54.422 "is_configured": true, 00:17:54.422 "data_offset": 256, 00:17:54.422 "data_size": 7936 00:17:54.422 } 00:17:54.422 ] 00:17:54.422 }' 00:17:54.422 10:02:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:54.422 10:02:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:54.682 10:02:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:54.682 10:02:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:54.682 10:02:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.682 10:02:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.682 10:02:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:54.682 10:02:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:54.682 10:02:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.942 10:02:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:54.942 10:02:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:54.942 10:02:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd 
bdev_malloc_delete BaseBdev2 00:17:54.942 10:02:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.942 10:02:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:54.942 [2024-10-21 10:02:31.297166] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:54.942 [2024-10-21 10:02:31.297290] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:54.942 [2024-10-21 10:02:31.414445] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:54.942 [2024-10-21 10:02:31.414672] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:54.942 [2024-10-21 10:02:31.414695] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state offline 00:17:54.942 10:02:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.942 10:02:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:54.942 10:02:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:54.942 10:02:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.942 10:02:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:54.942 10:02:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.942 10:02:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:54.942 10:02:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.942 10:02:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:54.942 10:02:31 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:54.942 10:02:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:17:54.942 10:02:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 85602 00:17:54.942 10:02:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@950 -- # '[' -z 85602 ']' 00:17:54.942 10:02:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # kill -0 85602 00:17:54.942 10:02:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@955 -- # uname 00:17:54.942 10:02:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:54.942 10:02:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85602 00:17:54.942 10:02:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:54.942 10:02:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:54.942 10:02:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85602' 00:17:54.942 killing process with pid 85602 00:17:54.942 10:02:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@969 -- # kill 85602 00:17:54.942 10:02:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@974 -- # wait 85602 00:17:54.942 [2024-10-21 10:02:31.505956] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:54.942 [2024-10-21 10:02:31.524846] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:56.851 10:02:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:17:56.851 00:17:56.851 real 0m5.610s 00:17:56.851 user 0m7.813s 00:17:56.851 sys 0m1.051s 00:17:56.851 10:02:32 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:56.851 ************************************ 00:17:56.851 END TEST raid_state_function_test_sb_4k 00:17:56.851 ************************************ 00:17:56.851 10:02:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:56.851 10:02:33 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:17:56.851 10:02:33 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:17:56.851 10:02:33 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:56.851 10:02:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:56.851 ************************************ 00:17:56.851 START TEST raid_superblock_test_4k 00:17:56.851 ************************************ 00:17:56.851 10:02:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:17:56.851 10:02:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:17:56.851 10:02:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:17:56.851 10:02:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:56.851 10:02:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:56.851 10:02:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:56.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:56.851 10:02:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:56.851 10:02:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:56.851 10:02:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:56.851 10:02:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:17:56.851 10:02:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:56.851 10:02:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:56.851 10:02:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:56.851 10:02:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:56.851 10:02:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:17:56.851 10:02:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:17:56.851 10:02:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=85860 00:17:56.851 10:02:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 85860 00:17:56.851 10:02:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:56.851 10:02:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@831 -- # '[' -z 85860 ']' 00:17:56.851 10:02:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:56.851 10:02:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:56.851 10:02:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:56.851 10:02:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:56.851 10:02:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:56.851 [2024-10-21 10:02:33.146383] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:17:56.851 [2024-10-21 10:02:33.146676] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85860 ] 00:17:56.851 [2024-10-21 10:02:33.317098] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:57.111 [2024-10-21 10:02:33.479786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:57.370 [2024-10-21 10:02:33.780915] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:57.370 [2024-10-21 10:02:33.781077] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:57.630 10:02:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:57.630 10:02:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # return 0 00:17:57.630 10:02:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:57.630 10:02:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:57.630 10:02:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:57.630 10:02:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:57.630 10:02:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:57.630 10:02:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:57.630 10:02:33 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:57.630 10:02:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:57.630 10:02:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc1 00:17:57.630 10:02:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.630 10:02:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:57.630 malloc1 00:17:57.630 10:02:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.630 10:02:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:57.630 10:02:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.630 10:02:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:57.630 [2024-10-21 10:02:34.056780] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:57.630 [2024-10-21 10:02:34.056926] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:57.630 [2024-10-21 10:02:34.056998] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:17:57.630 [2024-10-21 10:02:34.057061] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:57.630 [2024-10-21 10:02:34.059979] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:57.630 [2024-10-21 10:02:34.060070] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:57.630 pt1 00:17:57.630 10:02:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.630 10:02:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( 
i++ )) 00:17:57.630 10:02:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:57.630 10:02:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:57.630 10:02:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:17:57.630 10:02:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:57.630 10:02:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:57.630 10:02:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:57.630 10:02:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:57.630 10:02:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:17:57.630 10:02:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.630 10:02:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:57.630 malloc2 00:17:57.630 10:02:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.630 10:02:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:57.630 10:02:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.630 10:02:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:57.630 [2024-10-21 10:02:34.128346] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:57.630 [2024-10-21 10:02:34.128476] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:57.630 [2024-10-21 10:02:34.128542] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:17:57.630 [2024-10-21 10:02:34.128604] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:57.630 [2024-10-21 10:02:34.131341] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:57.630 [2024-10-21 10:02:34.131430] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:57.630 pt2 00:17:57.630 10:02:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.630 10:02:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:57.630 10:02:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:57.630 10:02:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:17:57.630 10:02:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.630 10:02:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:57.631 [2024-10-21 10:02:34.140409] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:57.631 [2024-10-21 10:02:34.142782] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:57.631 [2024-10-21 10:02:34.143055] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000005b80 00:17:57.631 [2024-10-21 10:02:34.143115] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:57.631 [2024-10-21 10:02:34.143483] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:17:57.631 [2024-10-21 10:02:34.143787] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000005b80 00:17:57.631 [2024-10-21 10:02:34.143852] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name 
raid_bdev1, raid_bdev 0x617000005b80 00:17:57.631 [2024-10-21 10:02:34.144126] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:57.631 10:02:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.631 10:02:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:57.631 10:02:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:57.631 10:02:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:57.631 10:02:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:57.631 10:02:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:57.631 10:02:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:57.631 10:02:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:57.631 10:02:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:57.631 10:02:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:57.631 10:02:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:57.631 10:02:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.631 10:02:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.631 10:02:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:57.631 10:02:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:57.631 10:02:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.631 10:02:34 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:57.631 "name": "raid_bdev1", 00:17:57.631 "uuid": "02db8077-a0fd-461b-9659-eb1c5d509554", 00:17:57.631 "strip_size_kb": 0, 00:17:57.631 "state": "online", 00:17:57.631 "raid_level": "raid1", 00:17:57.631 "superblock": true, 00:17:57.631 "num_base_bdevs": 2, 00:17:57.631 "num_base_bdevs_discovered": 2, 00:17:57.631 "num_base_bdevs_operational": 2, 00:17:57.631 "base_bdevs_list": [ 00:17:57.631 { 00:17:57.631 "name": "pt1", 00:17:57.631 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:57.631 "is_configured": true, 00:17:57.631 "data_offset": 256, 00:17:57.631 "data_size": 7936 00:17:57.631 }, 00:17:57.631 { 00:17:57.631 "name": "pt2", 00:17:57.631 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:57.631 "is_configured": true, 00:17:57.631 "data_offset": 256, 00:17:57.631 "data_size": 7936 00:17:57.631 } 00:17:57.631 ] 00:17:57.631 }' 00:17:57.631 10:02:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:57.631 10:02:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:58.201 10:02:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:58.201 10:02:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:58.201 10:02:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:58.201 10:02:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:58.201 10:02:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:17:58.201 10:02:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:58.201 10:02:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:58.201 10:02:34 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:58.201 10:02:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.201 10:02:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:58.201 [2024-10-21 10:02:34.599947] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:58.201 10:02:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.201 10:02:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:58.201 "name": "raid_bdev1", 00:17:58.201 "aliases": [ 00:17:58.201 "02db8077-a0fd-461b-9659-eb1c5d509554" 00:17:58.201 ], 00:17:58.201 "product_name": "Raid Volume", 00:17:58.201 "block_size": 4096, 00:17:58.201 "num_blocks": 7936, 00:17:58.201 "uuid": "02db8077-a0fd-461b-9659-eb1c5d509554", 00:17:58.201 "assigned_rate_limits": { 00:17:58.201 "rw_ios_per_sec": 0, 00:17:58.201 "rw_mbytes_per_sec": 0, 00:17:58.201 "r_mbytes_per_sec": 0, 00:17:58.201 "w_mbytes_per_sec": 0 00:17:58.201 }, 00:17:58.201 "claimed": false, 00:17:58.201 "zoned": false, 00:17:58.201 "supported_io_types": { 00:17:58.201 "read": true, 00:17:58.201 "write": true, 00:17:58.201 "unmap": false, 00:17:58.201 "flush": false, 00:17:58.201 "reset": true, 00:17:58.201 "nvme_admin": false, 00:17:58.201 "nvme_io": false, 00:17:58.201 "nvme_io_md": false, 00:17:58.202 "write_zeroes": true, 00:17:58.202 "zcopy": false, 00:17:58.202 "get_zone_info": false, 00:17:58.202 "zone_management": false, 00:17:58.202 "zone_append": false, 00:17:58.202 "compare": false, 00:17:58.202 "compare_and_write": false, 00:17:58.202 "abort": false, 00:17:58.202 "seek_hole": false, 00:17:58.202 "seek_data": false, 00:17:58.202 "copy": false, 00:17:58.202 "nvme_iov_md": false 00:17:58.202 }, 00:17:58.202 "memory_domains": [ 00:17:58.202 { 00:17:58.202 "dma_device_id": "system", 00:17:58.202 "dma_device_type": 1 00:17:58.202 }, 00:17:58.202 { 00:17:58.202 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:58.202 "dma_device_type": 2 00:17:58.202 }, 00:17:58.202 { 00:17:58.202 "dma_device_id": "system", 00:17:58.202 "dma_device_type": 1 00:17:58.202 }, 00:17:58.202 { 00:17:58.202 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:58.202 "dma_device_type": 2 00:17:58.202 } 00:17:58.202 ], 00:17:58.202 "driver_specific": { 00:17:58.202 "raid": { 00:17:58.202 "uuid": "02db8077-a0fd-461b-9659-eb1c5d509554", 00:17:58.202 "strip_size_kb": 0, 00:17:58.202 "state": "online", 00:17:58.202 "raid_level": "raid1", 00:17:58.202 "superblock": true, 00:17:58.202 "num_base_bdevs": 2, 00:17:58.202 "num_base_bdevs_discovered": 2, 00:17:58.202 "num_base_bdevs_operational": 2, 00:17:58.202 "base_bdevs_list": [ 00:17:58.202 { 00:17:58.202 "name": "pt1", 00:17:58.202 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:58.202 "is_configured": true, 00:17:58.202 "data_offset": 256, 00:17:58.202 "data_size": 7936 00:17:58.202 }, 00:17:58.202 { 00:17:58.202 "name": "pt2", 00:17:58.202 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:58.202 "is_configured": true, 00:17:58.202 "data_offset": 256, 00:17:58.202 "data_size": 7936 00:17:58.202 } 00:17:58.202 ] 00:17:58.202 } 00:17:58.202 } 00:17:58.202 }' 00:17:58.202 10:02:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:58.202 10:02:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:58.202 pt2' 00:17:58.202 10:02:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:58.202 10:02:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:17:58.202 10:02:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:58.202 10:02:34 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:58.202 10:02:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.202 10:02:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:58.202 10:02:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:58.202 10:02:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.202 10:02:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:58.202 10:02:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:58.202 10:02:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:58.202 10:02:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:58.202 10:02:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:58.202 10:02:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.202 10:02:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:58.202 10:02:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.463 10:02:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:58.463 10:02:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:58.463 10:02:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:58.463 10:02:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:58.463 10:02:34 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.463 10:02:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:58.463 [2024-10-21 10:02:34.831479] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:58.463 10:02:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.463 10:02:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=02db8077-a0fd-461b-9659-eb1c5d509554 00:17:58.463 10:02:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z 02db8077-a0fd-461b-9659-eb1c5d509554 ']' 00:17:58.463 10:02:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:58.463 10:02:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.463 10:02:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:58.463 [2024-10-21 10:02:34.859138] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:58.463 [2024-10-21 10:02:34.859167] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:58.463 [2024-10-21 10:02:34.859272] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:58.463 [2024-10-21 10:02:34.859347] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:58.463 [2024-10-21 10:02:34.859363] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000005b80 name raid_bdev1, state offline 00:17:58.463 10:02:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.463 10:02:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:58.463 10:02:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.463 10:02:34 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.463 10:02:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:58.463 10:02:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.463 10:02:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:58.463 10:02:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:58.463 10:02:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:58.463 10:02:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:17:58.463 10:02:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.463 10:02:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:58.463 10:02:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.463 10:02:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:58.463 10:02:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:17:58.463 10:02:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.463 10:02:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:58.463 10:02:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.463 10:02:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:58.463 10:02:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:17:58.463 10:02:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.463 10:02:34 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:58.463 10:02:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.463 10:02:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:17:58.463 10:02:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:58.463 10:02:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@650 -- # local es=0 00:17:58.463 10:02:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:58.463 10:02:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:58.463 10:02:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:58.463 10:02:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:58.463 10:02:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:58.463 10:02:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:58.463 10:02:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.463 10:02:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:58.463 [2024-10-21 10:02:34.962984] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:58.463 [2024-10-21 10:02:34.965466] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:58.463 [2024-10-21 10:02:34.965624] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on 
bdev malloc1 00:17:58.463 [2024-10-21 10:02:34.965734] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:58.463 [2024-10-21 10:02:34.965824] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:58.463 [2024-10-21 10:02:34.965878] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000005f00 name raid_bdev1, state configuring 00:17:58.463 request: 00:17:58.463 { 00:17:58.463 "name": "raid_bdev1", 00:17:58.463 "raid_level": "raid1", 00:17:58.463 "base_bdevs": [ 00:17:58.463 "malloc1", 00:17:58.463 "malloc2" 00:17:58.463 ], 00:17:58.463 "superblock": false, 00:17:58.463 "method": "bdev_raid_create", 00:17:58.463 "req_id": 1 00:17:58.463 } 00:17:58.463 Got JSON-RPC error response 00:17:58.463 response: 00:17:58.463 { 00:17:58.463 "code": -17, 00:17:58.463 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:58.463 } 00:17:58.463 10:02:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:58.463 10:02:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # es=1 00:17:58.463 10:02:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:58.463 10:02:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:58.463 10:02:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:58.463 10:02:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.463 10:02:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.463 10:02:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:58.463 10:02:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:58.463 10:02:34 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.463 10:02:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:58.463 10:02:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:58.463 10:02:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:58.463 10:02:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.463 10:02:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:58.463 [2024-10-21 10:02:35.026837] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:58.463 [2024-10-21 10:02:35.026960] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:58.463 [2024-10-21 10:02:35.026999] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:17:58.463 [2024-10-21 10:02:35.027048] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:58.463 [2024-10-21 10:02:35.030025] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:58.463 [2024-10-21 10:02:35.030114] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:58.463 [2024-10-21 10:02:35.030252] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:58.463 [2024-10-21 10:02:35.030358] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:58.463 pt1 00:17:58.463 10:02:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.464 10:02:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:58.464 10:02:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:58.464 
10:02:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:58.464 10:02:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:58.464 10:02:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:58.464 10:02:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:58.464 10:02:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:58.464 10:02:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:58.464 10:02:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:58.464 10:02:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:58.464 10:02:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.464 10:02:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:58.464 10:02:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.464 10:02:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:58.464 10:02:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.724 10:02:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:58.724 "name": "raid_bdev1", 00:17:58.724 "uuid": "02db8077-a0fd-461b-9659-eb1c5d509554", 00:17:58.724 "strip_size_kb": 0, 00:17:58.724 "state": "configuring", 00:17:58.724 "raid_level": "raid1", 00:17:58.724 "superblock": true, 00:17:58.724 "num_base_bdevs": 2, 00:17:58.724 "num_base_bdevs_discovered": 1, 00:17:58.724 "num_base_bdevs_operational": 2, 00:17:58.724 "base_bdevs_list": [ 00:17:58.724 { 00:17:58.724 "name": "pt1", 00:17:58.724 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:17:58.724 "is_configured": true, 00:17:58.724 "data_offset": 256, 00:17:58.724 "data_size": 7936 00:17:58.724 }, 00:17:58.724 { 00:17:58.724 "name": null, 00:17:58.724 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:58.724 "is_configured": false, 00:17:58.724 "data_offset": 256, 00:17:58.724 "data_size": 7936 00:17:58.724 } 00:17:58.724 ] 00:17:58.724 }' 00:17:58.724 10:02:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:58.724 10:02:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:58.984 10:02:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:17:58.984 10:02:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:58.984 10:02:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:58.984 10:02:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:58.984 10:02:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.984 10:02:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:58.984 [2024-10-21 10:02:35.478144] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:58.984 [2024-10-21 10:02:35.478295] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:58.984 [2024-10-21 10:02:35.478349] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:17:58.984 [2024-10-21 10:02:35.478388] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:58.984 [2024-10-21 10:02:35.479084] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:58.984 [2024-10-21 10:02:35.479169] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: pt2 00:17:58.984 [2024-10-21 10:02:35.479312] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:58.984 [2024-10-21 10:02:35.479378] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:58.984 [2024-10-21 10:02:35.479588] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:17:58.984 [2024-10-21 10:02:35.479639] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:58.984 [2024-10-21 10:02:35.479965] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:17:58.984 [2024-10-21 10:02:35.480204] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:17:58.984 [2024-10-21 10:02:35.480254] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:17:58.984 [2024-10-21 10:02:35.480483] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:58.984 pt2 00:17:58.984 10:02:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.984 10:02:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:58.984 10:02:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:58.984 10:02:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:58.984 10:02:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:58.984 10:02:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:58.984 10:02:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:58.984 10:02:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:58.984 10:02:35 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:58.984 10:02:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:58.984 10:02:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:58.984 10:02:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:58.984 10:02:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:58.984 10:02:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.984 10:02:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:58.984 10:02:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.984 10:02:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:58.984 10:02:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.984 10:02:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:58.984 "name": "raid_bdev1", 00:17:58.984 "uuid": "02db8077-a0fd-461b-9659-eb1c5d509554", 00:17:58.984 "strip_size_kb": 0, 00:17:58.984 "state": "online", 00:17:58.984 "raid_level": "raid1", 00:17:58.984 "superblock": true, 00:17:58.984 "num_base_bdevs": 2, 00:17:58.984 "num_base_bdevs_discovered": 2, 00:17:58.984 "num_base_bdevs_operational": 2, 00:17:58.984 "base_bdevs_list": [ 00:17:58.984 { 00:17:58.984 "name": "pt1", 00:17:58.984 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:58.984 "is_configured": true, 00:17:58.984 "data_offset": 256, 00:17:58.984 "data_size": 7936 00:17:58.984 }, 00:17:58.984 { 00:17:58.985 "name": "pt2", 00:17:58.985 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:58.985 "is_configured": true, 00:17:58.985 "data_offset": 256, 00:17:58.985 
"data_size": 7936 00:17:58.985 } 00:17:58.985 ] 00:17:58.985 }' 00:17:58.985 10:02:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:58.985 10:02:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:59.562 10:02:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:17:59.562 10:02:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:59.562 10:02:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:59.562 10:02:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:59.562 10:02:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:17:59.562 10:02:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:59.562 10:02:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:59.562 10:02:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:59.562 10:02:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.562 10:02:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:59.562 [2024-10-21 10:02:35.965693] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:59.562 10:02:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.562 10:02:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:59.562 "name": "raid_bdev1", 00:17:59.562 "aliases": [ 00:17:59.562 "02db8077-a0fd-461b-9659-eb1c5d509554" 00:17:59.562 ], 00:17:59.562 "product_name": "Raid Volume", 00:17:59.562 "block_size": 4096, 00:17:59.562 "num_blocks": 7936, 00:17:59.562 "uuid": "02db8077-a0fd-461b-9659-eb1c5d509554", 
00:17:59.562 "assigned_rate_limits": { 00:17:59.562 "rw_ios_per_sec": 0, 00:17:59.562 "rw_mbytes_per_sec": 0, 00:17:59.562 "r_mbytes_per_sec": 0, 00:17:59.562 "w_mbytes_per_sec": 0 00:17:59.562 }, 00:17:59.562 "claimed": false, 00:17:59.562 "zoned": false, 00:17:59.562 "supported_io_types": { 00:17:59.562 "read": true, 00:17:59.562 "write": true, 00:17:59.562 "unmap": false, 00:17:59.562 "flush": false, 00:17:59.562 "reset": true, 00:17:59.562 "nvme_admin": false, 00:17:59.562 "nvme_io": false, 00:17:59.562 "nvme_io_md": false, 00:17:59.562 "write_zeroes": true, 00:17:59.562 "zcopy": false, 00:17:59.562 "get_zone_info": false, 00:17:59.562 "zone_management": false, 00:17:59.562 "zone_append": false, 00:17:59.562 "compare": false, 00:17:59.562 "compare_and_write": false, 00:17:59.562 "abort": false, 00:17:59.562 "seek_hole": false, 00:17:59.562 "seek_data": false, 00:17:59.562 "copy": false, 00:17:59.562 "nvme_iov_md": false 00:17:59.562 }, 00:17:59.562 "memory_domains": [ 00:17:59.562 { 00:17:59.562 "dma_device_id": "system", 00:17:59.562 "dma_device_type": 1 00:17:59.562 }, 00:17:59.562 { 00:17:59.562 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:59.562 "dma_device_type": 2 00:17:59.562 }, 00:17:59.562 { 00:17:59.562 "dma_device_id": "system", 00:17:59.562 "dma_device_type": 1 00:17:59.562 }, 00:17:59.562 { 00:17:59.562 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:59.562 "dma_device_type": 2 00:17:59.562 } 00:17:59.562 ], 00:17:59.562 "driver_specific": { 00:17:59.562 "raid": { 00:17:59.562 "uuid": "02db8077-a0fd-461b-9659-eb1c5d509554", 00:17:59.562 "strip_size_kb": 0, 00:17:59.562 "state": "online", 00:17:59.562 "raid_level": "raid1", 00:17:59.562 "superblock": true, 00:17:59.562 "num_base_bdevs": 2, 00:17:59.562 "num_base_bdevs_discovered": 2, 00:17:59.562 "num_base_bdevs_operational": 2, 00:17:59.562 "base_bdevs_list": [ 00:17:59.562 { 00:17:59.562 "name": "pt1", 00:17:59.562 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:59.562 "is_configured": 
true, 00:17:59.563 "data_offset": 256, 00:17:59.563 "data_size": 7936 00:17:59.563 }, 00:17:59.563 { 00:17:59.563 "name": "pt2", 00:17:59.563 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:59.563 "is_configured": true, 00:17:59.563 "data_offset": 256, 00:17:59.563 "data_size": 7936 00:17:59.563 } 00:17:59.563 ] 00:17:59.563 } 00:17:59.563 } 00:17:59.563 }' 00:17:59.563 10:02:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:59.563 10:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:59.563 pt2' 00:17:59.563 10:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:59.563 10:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:17:59.563 10:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:59.563 10:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:59.563 10:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:59.563 10:02:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.563 10:02:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:59.563 10:02:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.563 10:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:59.563 10:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:59.563 10:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:59.563 10:02:36 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:59.563 10:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:59.563 10:02:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.563 10:02:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:59.563 10:02:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.823 10:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:59.823 10:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:59.823 10:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:59.823 10:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:59.823 10:02:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.823 10:02:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:59.823 [2024-10-21 10:02:36.185281] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:59.823 10:02:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.823 10:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' 02db8077-a0fd-461b-9659-eb1c5d509554 '!=' 02db8077-a0fd-461b-9659-eb1c5d509554 ']' 00:17:59.823 10:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:17:59.823 10:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:59.823 10:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:17:59.823 10:02:36 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:17:59.823 10:02:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.823 10:02:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:59.823 [2024-10-21 10:02:36.213002] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:59.823 10:02:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.823 10:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:59.823 10:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:59.823 10:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:59.823 10:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:59.823 10:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:59.823 10:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:59.823 10:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:59.823 10:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:59.823 10:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:59.823 10:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:59.823 10:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.823 10:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:59.823 10:02:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.823 10:02:36 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:59.823 10:02:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.823 10:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:59.823 "name": "raid_bdev1", 00:17:59.823 "uuid": "02db8077-a0fd-461b-9659-eb1c5d509554", 00:17:59.823 "strip_size_kb": 0, 00:17:59.823 "state": "online", 00:17:59.823 "raid_level": "raid1", 00:17:59.823 "superblock": true, 00:17:59.823 "num_base_bdevs": 2, 00:17:59.823 "num_base_bdevs_discovered": 1, 00:17:59.823 "num_base_bdevs_operational": 1, 00:17:59.823 "base_bdevs_list": [ 00:17:59.823 { 00:17:59.823 "name": null, 00:17:59.823 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:59.823 "is_configured": false, 00:17:59.823 "data_offset": 0, 00:17:59.823 "data_size": 7936 00:17:59.823 }, 00:17:59.823 { 00:17:59.823 "name": "pt2", 00:17:59.823 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:59.823 "is_configured": true, 00:17:59.823 "data_offset": 256, 00:17:59.823 "data_size": 7936 00:17:59.823 } 00:17:59.823 ] 00:17:59.823 }' 00:17:59.823 10:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:59.823 10:02:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:00.084 10:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:00.084 10:02:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.084 10:02:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:00.084 [2024-10-21 10:02:36.664244] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:00.084 [2024-10-21 10:02:36.664341] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:00.084 [2024-10-21 10:02:36.664484] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:00.084 [2024-10-21 10:02:36.664585] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:00.084 [2024-10-21 10:02:36.664643] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:18:00.084 10:02:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.084 10:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.084 10:02:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.084 10:02:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:00.084 10:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:18:00.345 10:02:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.345 10:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:18:00.345 10:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:18:00.345 10:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:18:00.345 10:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:00.345 10:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:18:00.345 10:02:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.345 10:02:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:00.345 10:02:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.345 10:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:00.345 10:02:36 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:00.345 10:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:18:00.345 10:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:00.345 10:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 00:18:00.345 10:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:00.345 10:02:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.345 10:02:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:00.345 [2024-10-21 10:02:36.724100] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:00.345 [2024-10-21 10:02:36.724230] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:00.345 [2024-10-21 10:02:36.724259] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:00.345 [2024-10-21 10:02:36.724274] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:00.345 [2024-10-21 10:02:36.727303] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:00.345 pt2 00:18:00.345 [2024-10-21 10:02:36.727401] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:00.345 [2024-10-21 10:02:36.727514] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:00.345 [2024-10-21 10:02:36.727602] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:00.345 [2024-10-21 10:02:36.727759] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:18:00.346 [2024-10-21 10:02:36.727776] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:00.346 [2024-10-21 
10:02:36.728057] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:18:00.346 [2024-10-21 10:02:36.728252] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:18:00.346 [2024-10-21 10:02:36.728262] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:18:00.346 [2024-10-21 10:02:36.728457] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:00.346 10:02:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.346 10:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:00.346 10:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:00.346 10:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:00.346 10:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:00.346 10:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:00.346 10:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:00.346 10:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:00.346 10:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:00.346 10:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:00.346 10:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:00.346 10:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.346 10:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:00.346 
10:02:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.346 10:02:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:00.346 10:02:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.346 10:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:00.346 "name": "raid_bdev1", 00:18:00.346 "uuid": "02db8077-a0fd-461b-9659-eb1c5d509554", 00:18:00.346 "strip_size_kb": 0, 00:18:00.346 "state": "online", 00:18:00.346 "raid_level": "raid1", 00:18:00.346 "superblock": true, 00:18:00.346 "num_base_bdevs": 2, 00:18:00.346 "num_base_bdevs_discovered": 1, 00:18:00.346 "num_base_bdevs_operational": 1, 00:18:00.346 "base_bdevs_list": [ 00:18:00.346 { 00:18:00.346 "name": null, 00:18:00.346 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:00.346 "is_configured": false, 00:18:00.346 "data_offset": 256, 00:18:00.346 "data_size": 7936 00:18:00.346 }, 00:18:00.346 { 00:18:00.346 "name": "pt2", 00:18:00.346 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:00.346 "is_configured": true, 00:18:00.346 "data_offset": 256, 00:18:00.346 "data_size": 7936 00:18:00.346 } 00:18:00.346 ] 00:18:00.346 }' 00:18:00.346 10:02:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:00.346 10:02:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:00.606 10:02:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:00.606 10:02:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.606 10:02:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:00.606 [2024-10-21 10:02:37.191668] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:00.606 [2024-10-21 10:02:37.191752] bdev_raid.c:1899:raid_bdev_deconfigure: 
*DEBUG*: raid bdev state changing from online to offline 00:18:00.606 [2024-10-21 10:02:37.191891] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:00.606 [2024-10-21 10:02:37.191986] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:00.606 [2024-10-21 10:02:37.192037] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:18:00.606 10:02:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.867 10:02:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.867 10:02:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:18:00.867 10:02:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.867 10:02:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:00.867 10:02:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.867 10:02:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:18:00.867 10:02:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:18:00.867 10:02:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:18:00.867 10:02:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:00.867 10:02:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.867 10:02:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:00.867 [2024-10-21 10:02:37.255554] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:00.867 [2024-10-21 10:02:37.255709] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:00.867 [2024-10-21 10:02:37.255764] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:18:00.867 [2024-10-21 10:02:37.255821] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:00.867 [2024-10-21 10:02:37.258738] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:00.867 [2024-10-21 10:02:37.258811] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:00.867 [2024-10-21 10:02:37.258960] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:00.867 [2024-10-21 10:02:37.259069] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:00.867 [2024-10-21 10:02:37.259284] bdev_raid.c:3679:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:00.867 [2024-10-21 10:02:37.259347] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:00.867 [2024-10-21 10:02:37.259392] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state configuring 00:18:00.867 [2024-10-21 10:02:37.259537] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:00.867 [2024-10-21 10:02:37.259708] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:18:00.867 [2024-10-21 10:02:37.259753] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:00.867 [2024-10-21 10:02:37.260066] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:00.867 [2024-10-21 10:02:37.260323] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:18:00.867 [2024-10-21 10:02:37.260378] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name 
raid_bdev1, raid_bdev 0x617000006d00 00:18:00.867 [2024-10-21 10:02:37.260670] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:00.867 pt1 00:18:00.867 10:02:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.867 10:02:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:18:00.867 10:02:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:00.867 10:02:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:00.867 10:02:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:00.867 10:02:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:00.867 10:02:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:00.867 10:02:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:00.867 10:02:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:00.867 10:02:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:00.867 10:02:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:00.868 10:02:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:00.868 10:02:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.868 10:02:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.868 10:02:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:00.868 10:02:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:00.868 10:02:37 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.868 10:02:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:00.868 "name": "raid_bdev1", 00:18:00.868 "uuid": "02db8077-a0fd-461b-9659-eb1c5d509554", 00:18:00.868 "strip_size_kb": 0, 00:18:00.868 "state": "online", 00:18:00.868 "raid_level": "raid1", 00:18:00.868 "superblock": true, 00:18:00.868 "num_base_bdevs": 2, 00:18:00.868 "num_base_bdevs_discovered": 1, 00:18:00.868 "num_base_bdevs_operational": 1, 00:18:00.868 "base_bdevs_list": [ 00:18:00.868 { 00:18:00.868 "name": null, 00:18:00.868 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:00.868 "is_configured": false, 00:18:00.868 "data_offset": 256, 00:18:00.868 "data_size": 7936 00:18:00.868 }, 00:18:00.868 { 00:18:00.868 "name": "pt2", 00:18:00.868 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:00.868 "is_configured": true, 00:18:00.868 "data_offset": 256, 00:18:00.868 "data_size": 7936 00:18:00.868 } 00:18:00.868 ] 00:18:00.868 }' 00:18:00.868 10:02:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:00.868 10:02:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:01.128 10:02:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:18:01.128 10:02:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.128 10:02:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:01.128 10:02:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:01.128 10:02:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.128 10:02:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:18:01.128 10:02:37 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:18:01.128 10:02:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:01.128 10:02:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.128 10:02:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:01.128 [2024-10-21 10:02:37.715388] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:01.388 10:02:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.388 10:02:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' 02db8077-a0fd-461b-9659-eb1c5d509554 '!=' 02db8077-a0fd-461b-9659-eb1c5d509554 ']' 00:18:01.388 10:02:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 85860 00:18:01.388 10:02:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@950 -- # '[' -z 85860 ']' 00:18:01.388 10:02:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # kill -0 85860 00:18:01.388 10:02:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@955 -- # uname 00:18:01.388 10:02:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:01.388 10:02:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85860 00:18:01.388 10:02:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:01.388 killing process with pid 85860 00:18:01.388 10:02:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:01.388 10:02:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85860' 00:18:01.388 10:02:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@969 -- # kill 85860 00:18:01.388 10:02:37 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@974 -- # wait 85860 00:18:01.388 [2024-10-21 10:02:37.792485] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:01.388 [2024-10-21 10:02:37.792619] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:01.388 [2024-10-21 10:02:37.792690] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:01.388 [2024-10-21 10:02:37.792708] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:18:01.648 [2024-10-21 10:02:38.066520] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:03.029 10:02:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:18:03.029 ************************************ 00:18:03.029 END TEST raid_superblock_test_4k 00:18:03.029 ************************************ 00:18:03.029 00:18:03.029 real 0m6.513s 00:18:03.029 user 0m9.494s 00:18:03.029 sys 0m1.225s 00:18:03.029 10:02:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:03.029 10:02:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:03.029 10:02:39 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:18:03.029 10:02:39 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:18:03.029 10:02:39 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:18:03.029 10:02:39 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:03.029 10:02:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:03.029 ************************************ 00:18:03.029 START TEST raid_rebuild_test_sb_4k 00:18:03.029 ************************************ 00:18:03.029 10:02:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true 
false true 00:18:03.029 10:02:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:18:03.029 10:02:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:18:03.029 10:02:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:03.029 10:02:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:03.029 10:02:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:03.029 10:02:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:03.029 10:02:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:03.029 10:02:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:03.029 10:02:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:03.029 10:02:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:03.029 10:02:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:03.029 10:02:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:03.029 10:02:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:03.029 10:02:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:03.029 10:02:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:03.029 10:02:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:03.029 10:02:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:03.029 10:02:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:03.029 10:02:39 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:03.029 10:02:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:03.029 10:02:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:03.029 10:02:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:18:03.029 10:02:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:03.029 10:02:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:03.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:03.029 10:02:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=86183 00:18:03.029 10:02:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 86183 00:18:03.029 10:02:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@831 -- # '[' -z 86183 ']' 00:18:03.029 10:02:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:03.029 10:02:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:03.029 10:02:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:03.029 10:02:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:03.029 10:02:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:03.029 10:02:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:03.290 [2024-10-21 10:02:39.712098] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
00:18:03.290 [2024-10-21 10:02:39.712332] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86183 ] 00:18:03.290 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:03.290 Zero copy mechanism will not be used. 00:18:03.290 [2024-10-21 10:02:39.884635] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:03.549 [2024-10-21 10:02:40.047259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:03.809 [2024-10-21 10:02:40.348597] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:03.809 [2024-10-21 10:02:40.348803] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:04.069 10:02:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:04.069 10:02:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # return 0 00:18:04.069 10:02:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:04.069 10:02:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:18:04.069 10:02:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.069 10:02:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:04.069 BaseBdev1_malloc 00:18:04.069 10:02:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.069 10:02:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:04.069 10:02:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.069 10:02:40 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:04.069 [2024-10-21 10:02:40.654308] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:04.069 [2024-10-21 10:02:40.654469] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:04.069 [2024-10-21 10:02:40.654509] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:18:04.069 [2024-10-21 10:02:40.654525] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:04.069 [2024-10-21 10:02:40.657419] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:04.069 [2024-10-21 10:02:40.657469] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:04.069 BaseBdev1 00:18:04.069 10:02:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.069 10:02:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:04.069 10:02:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:18:04.069 10:02:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.069 10:02:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:04.329 BaseBdev2_malloc 00:18:04.329 10:02:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.329 10:02:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:04.329 10:02:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.329 10:02:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:04.329 [2024-10-21 10:02:40.717633] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on BaseBdev2_malloc 00:18:04.329 [2024-10-21 10:02:40.717703] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:04.329 [2024-10-21 10:02:40.717727] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:18:04.329 [2024-10-21 10:02:40.717742] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:04.329 [2024-10-21 10:02:40.720504] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:04.329 [2024-10-21 10:02:40.720626] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:04.329 BaseBdev2 00:18:04.329 10:02:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.329 10:02:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:18:04.329 10:02:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.329 10:02:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:04.329 spare_malloc 00:18:04.329 10:02:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.329 10:02:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:04.329 10:02:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.329 10:02:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:04.329 spare_delay 00:18:04.329 10:02:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.329 10:02:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:04.329 10:02:40 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.329 10:02:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:04.329 [2024-10-21 10:02:40.812827] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:04.329 [2024-10-21 10:02:40.812895] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:04.329 [2024-10-21 10:02:40.812919] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:18:04.329 [2024-10-21 10:02:40.812933] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:04.329 [2024-10-21 10:02:40.815761] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:04.329 [2024-10-21 10:02:40.815866] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:04.329 spare 00:18:04.329 10:02:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.329 10:02:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:18:04.329 10:02:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.329 10:02:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:04.329 [2024-10-21 10:02:40.824853] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:04.329 [2024-10-21 10:02:40.827345] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:04.329 [2024-10-21 10:02:40.827663] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000005b80 00:18:04.329 [2024-10-21 10:02:40.827689] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:04.329 [2024-10-21 10:02:40.828016] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005ba0 00:18:04.329 [2024-10-21 10:02:40.828216] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000005b80 00:18:04.329 [2024-10-21 10:02:40.828228] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000005b80 00:18:04.329 [2024-10-21 10:02:40.828430] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:04.329 10:02:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.329 10:02:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:04.329 10:02:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:04.329 10:02:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:04.329 10:02:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:04.329 10:02:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:04.329 10:02:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:04.329 10:02:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:04.329 10:02:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:04.329 10:02:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:04.329 10:02:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:04.329 10:02:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.329 10:02:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:04.329 10:02:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:18:04.329 10:02:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:04.329 10:02:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.329 10:02:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:04.329 "name": "raid_bdev1", 00:18:04.329 "uuid": "a3580343-3fed-43dc-98f8-8e4c77111cc0", 00:18:04.329 "strip_size_kb": 0, 00:18:04.329 "state": "online", 00:18:04.330 "raid_level": "raid1", 00:18:04.330 "superblock": true, 00:18:04.330 "num_base_bdevs": 2, 00:18:04.330 "num_base_bdevs_discovered": 2, 00:18:04.330 "num_base_bdevs_operational": 2, 00:18:04.330 "base_bdevs_list": [ 00:18:04.330 { 00:18:04.330 "name": "BaseBdev1", 00:18:04.330 "uuid": "20a43833-7380-5148-9abd-aa2e402e368e", 00:18:04.330 "is_configured": true, 00:18:04.330 "data_offset": 256, 00:18:04.330 "data_size": 7936 00:18:04.330 }, 00:18:04.330 { 00:18:04.330 "name": "BaseBdev2", 00:18:04.330 "uuid": "1984c5d7-a438-54fe-ac4a-7f06c4ff9f67", 00:18:04.330 "is_configured": true, 00:18:04.330 "data_offset": 256, 00:18:04.330 "data_size": 7936 00:18:04.330 } 00:18:04.330 ] 00:18:04.330 }' 00:18:04.330 10:02:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:04.330 10:02:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:04.900 10:02:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:04.900 10:02:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:04.900 10:02:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.900 10:02:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:04.900 [2024-10-21 10:02:41.308442] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:04.900 10:02:41 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.900 10:02:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:18:04.900 10:02:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.900 10:02:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.900 10:02:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:04.900 10:02:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:04.900 10:02:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.900 10:02:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:18:04.900 10:02:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:04.900 10:02:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:18:04.900 10:02:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:18:04.900 10:02:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:18:04.900 10:02:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:04.900 10:02:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:04.900 10:02:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:04.900 10:02:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:04.900 10:02:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:04.900 10:02:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:18:04.900 10:02:41 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:04.900 10:02:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:04.900 10:02:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:05.160 [2024-10-21 10:02:41.599712] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:05.160 /dev/nbd0 00:18:05.160 10:02:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:05.160 10:02:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:05.160 10:02:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:18:05.160 10:02:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:18:05.160 10:02:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:05.160 10:02:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:05.160 10:02:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:18:05.161 10:02:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:18:05.161 10:02:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:05.161 10:02:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:05.161 10:02:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:05.161 1+0 records in 00:18:05.161 1+0 records out 00:18:05.161 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000523243 s, 7.8 MB/s 00:18:05.161 10:02:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- 
# stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:05.161 10:02:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:18:05.161 10:02:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:05.161 10:02:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:05.161 10:02:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:18:05.161 10:02:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:05.161 10:02:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:05.161 10:02:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:18:05.161 10:02:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:18:05.161 10:02:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:18:06.100 7936+0 records in 00:18:06.100 7936+0 records out 00:18:06.100 32505856 bytes (33 MB, 31 MiB) copied, 0.754196 s, 43.1 MB/s 00:18:06.100 10:02:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:06.100 10:02:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:06.100 10:02:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:06.100 10:02:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:06.100 10:02:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:18:06.100 10:02:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:06.100 10:02:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:06.100 10:02:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:06.100 [2024-10-21 10:02:42.650021] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:06.100 10:02:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:06.100 10:02:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:06.100 10:02:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:06.100 10:02:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:06.100 10:02:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:06.100 10:02:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:18:06.100 10:02:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:18:06.100 10:02:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:06.100 10:02:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.100 10:02:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:06.100 [2024-10-21 10:02:42.666110] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:06.100 10:02:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.100 10:02:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:06.100 10:02:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:06.100 10:02:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:06.100 10:02:42 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:06.100 10:02:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:06.100 10:02:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:06.100 10:02:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:06.100 10:02:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:06.100 10:02:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:06.100 10:02:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:06.100 10:02:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:06.100 10:02:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.100 10:02:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.100 10:02:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:06.100 10:02:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.360 10:02:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:06.360 "name": "raid_bdev1", 00:18:06.360 "uuid": "a3580343-3fed-43dc-98f8-8e4c77111cc0", 00:18:06.360 "strip_size_kb": 0, 00:18:06.360 "state": "online", 00:18:06.360 "raid_level": "raid1", 00:18:06.360 "superblock": true, 00:18:06.360 "num_base_bdevs": 2, 00:18:06.360 "num_base_bdevs_discovered": 1, 00:18:06.360 "num_base_bdevs_operational": 1, 00:18:06.360 "base_bdevs_list": [ 00:18:06.360 { 00:18:06.360 "name": null, 00:18:06.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.360 "is_configured": false, 00:18:06.360 "data_offset": 0, 00:18:06.360 "data_size": 7936 
00:18:06.360 }, 00:18:06.360 { 00:18:06.360 "name": "BaseBdev2", 00:18:06.360 "uuid": "1984c5d7-a438-54fe-ac4a-7f06c4ff9f67", 00:18:06.360 "is_configured": true, 00:18:06.360 "data_offset": 256, 00:18:06.360 "data_size": 7936 00:18:06.360 } 00:18:06.360 ] 00:18:06.360 }' 00:18:06.360 10:02:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:06.360 10:02:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:06.620 10:02:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:06.620 10:02:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.620 10:02:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:06.620 [2024-10-21 10:02:43.149375] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:06.620 [2024-10-21 10:02:43.174742] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018cff0 00:18:06.620 10:02:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.620 10:02:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:06.620 [2024-10-21 10:02:43.177292] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:08.000 10:02:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:08.000 10:02:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:08.000 10:02:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:08.000 10:02:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:08.000 10:02:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:08.000 
10:02:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.000 10:02:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:08.000 10:02:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.000 10:02:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:08.000 10:02:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.000 10:02:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:08.000 "name": "raid_bdev1", 00:18:08.000 "uuid": "a3580343-3fed-43dc-98f8-8e4c77111cc0", 00:18:08.000 "strip_size_kb": 0, 00:18:08.000 "state": "online", 00:18:08.000 "raid_level": "raid1", 00:18:08.000 "superblock": true, 00:18:08.000 "num_base_bdevs": 2, 00:18:08.000 "num_base_bdevs_discovered": 2, 00:18:08.000 "num_base_bdevs_operational": 2, 00:18:08.000 "process": { 00:18:08.000 "type": "rebuild", 00:18:08.000 "target": "spare", 00:18:08.000 "progress": { 00:18:08.000 "blocks": 2560, 00:18:08.000 "percent": 32 00:18:08.000 } 00:18:08.000 }, 00:18:08.000 "base_bdevs_list": [ 00:18:08.000 { 00:18:08.000 "name": "spare", 00:18:08.000 "uuid": "d1d9976c-5bf9-576f-b076-dfee8876442a", 00:18:08.000 "is_configured": true, 00:18:08.000 "data_offset": 256, 00:18:08.000 "data_size": 7936 00:18:08.000 }, 00:18:08.000 { 00:18:08.000 "name": "BaseBdev2", 00:18:08.000 "uuid": "1984c5d7-a438-54fe-ac4a-7f06c4ff9f67", 00:18:08.000 "is_configured": true, 00:18:08.000 "data_offset": 256, 00:18:08.000 "data_size": 7936 00:18:08.000 } 00:18:08.000 ] 00:18:08.000 }' 00:18:08.000 10:02:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:08.000 10:02:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:08.000 10:02:44 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:08.000 10:02:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:08.000 10:02:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:08.000 10:02:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.000 10:02:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:08.000 [2024-10-21 10:02:44.340918] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:08.000 [2024-10-21 10:02:44.387820] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:08.000 [2024-10-21 10:02:44.387971] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:08.000 [2024-10-21 10:02:44.387992] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:08.000 [2024-10-21 10:02:44.388011] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:08.000 10:02:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.000 10:02:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:08.000 10:02:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:08.000 10:02:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:08.000 10:02:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:08.000 10:02:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:08.000 10:02:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:08.000 
10:02:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:08.000 10:02:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:08.000 10:02:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:08.000 10:02:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:08.000 10:02:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.000 10:02:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:08.000 10:02:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.000 10:02:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:08.000 10:02:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.000 10:02:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:08.000 "name": "raid_bdev1", 00:18:08.000 "uuid": "a3580343-3fed-43dc-98f8-8e4c77111cc0", 00:18:08.000 "strip_size_kb": 0, 00:18:08.000 "state": "online", 00:18:08.000 "raid_level": "raid1", 00:18:08.000 "superblock": true, 00:18:08.000 "num_base_bdevs": 2, 00:18:08.000 "num_base_bdevs_discovered": 1, 00:18:08.000 "num_base_bdevs_operational": 1, 00:18:08.000 "base_bdevs_list": [ 00:18:08.000 { 00:18:08.001 "name": null, 00:18:08.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.001 "is_configured": false, 00:18:08.001 "data_offset": 0, 00:18:08.001 "data_size": 7936 00:18:08.001 }, 00:18:08.001 { 00:18:08.001 "name": "BaseBdev2", 00:18:08.001 "uuid": "1984c5d7-a438-54fe-ac4a-7f06c4ff9f67", 00:18:08.001 "is_configured": true, 00:18:08.001 "data_offset": 256, 00:18:08.001 "data_size": 7936 00:18:08.001 } 00:18:08.001 ] 00:18:08.001 }' 00:18:08.001 10:02:44 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:08.001 10:02:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:08.570 10:02:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:08.570 10:02:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:08.570 10:02:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:08.570 10:02:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:08.570 10:02:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:08.570 10:02:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.570 10:02:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.570 10:02:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:08.570 10:02:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:08.570 10:02:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.570 10:02:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:08.570 "name": "raid_bdev1", 00:18:08.570 "uuid": "a3580343-3fed-43dc-98f8-8e4c77111cc0", 00:18:08.570 "strip_size_kb": 0, 00:18:08.570 "state": "online", 00:18:08.570 "raid_level": "raid1", 00:18:08.570 "superblock": true, 00:18:08.570 "num_base_bdevs": 2, 00:18:08.570 "num_base_bdevs_discovered": 1, 00:18:08.570 "num_base_bdevs_operational": 1, 00:18:08.570 "base_bdevs_list": [ 00:18:08.570 { 00:18:08.570 "name": null, 00:18:08.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.570 "is_configured": false, 00:18:08.570 "data_offset": 0, 00:18:08.570 
"data_size": 7936 00:18:08.570 }, 00:18:08.570 { 00:18:08.570 "name": "BaseBdev2", 00:18:08.570 "uuid": "1984c5d7-a438-54fe-ac4a-7f06c4ff9f67", 00:18:08.570 "is_configured": true, 00:18:08.570 "data_offset": 256, 00:18:08.570 "data_size": 7936 00:18:08.570 } 00:18:08.570 ] 00:18:08.570 }' 00:18:08.570 10:02:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:08.570 10:02:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:08.570 10:02:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:08.570 10:02:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:08.570 10:02:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:08.570 10:02:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.570 10:02:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:08.570 [2024-10-21 10:02:45.066810] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:08.570 [2024-10-21 10:02:45.090028] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d0c0 00:18:08.570 10:02:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.570 10:02:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:08.570 [2024-10-21 10:02:45.092695] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:09.506 10:02:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:09.506 10:02:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:09.506 10:02:46 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:09.506 10:02:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:09.506 10:02:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:09.764 10:02:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.764 10:02:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:09.764 10:02:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.764 10:02:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:09.764 10:02:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.764 10:02:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:09.764 "name": "raid_bdev1", 00:18:09.764 "uuid": "a3580343-3fed-43dc-98f8-8e4c77111cc0", 00:18:09.764 "strip_size_kb": 0, 00:18:09.764 "state": "online", 00:18:09.764 "raid_level": "raid1", 00:18:09.764 "superblock": true, 00:18:09.764 "num_base_bdevs": 2, 00:18:09.764 "num_base_bdevs_discovered": 2, 00:18:09.764 "num_base_bdevs_operational": 2, 00:18:09.764 "process": { 00:18:09.764 "type": "rebuild", 00:18:09.764 "target": "spare", 00:18:09.764 "progress": { 00:18:09.764 "blocks": 2560, 00:18:09.764 "percent": 32 00:18:09.764 } 00:18:09.764 }, 00:18:09.764 "base_bdevs_list": [ 00:18:09.764 { 00:18:09.764 "name": "spare", 00:18:09.764 "uuid": "d1d9976c-5bf9-576f-b076-dfee8876442a", 00:18:09.764 "is_configured": true, 00:18:09.764 "data_offset": 256, 00:18:09.764 "data_size": 7936 00:18:09.764 }, 00:18:09.764 { 00:18:09.764 "name": "BaseBdev2", 00:18:09.764 "uuid": "1984c5d7-a438-54fe-ac4a-7f06c4ff9f67", 00:18:09.764 "is_configured": true, 00:18:09.764 "data_offset": 256, 00:18:09.764 "data_size": 7936 00:18:09.764 } 00:18:09.764 ] 
00:18:09.764 }' 00:18:09.764 10:02:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:09.764 10:02:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:09.764 10:02:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:09.764 10:02:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:09.764 10:02:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:09.764 10:02:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:09.764 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:09.764 10:02:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:18:09.764 10:02:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:09.764 10:02:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:18:09.764 10:02:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=693 00:18:09.764 10:02:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:09.764 10:02:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:09.764 10:02:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:09.764 10:02:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:09.764 10:02:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:09.764 10:02:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:09.764 10:02:46 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.764 10:02:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:09.764 10:02:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.764 10:02:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:09.764 10:02:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.764 10:02:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:09.764 "name": "raid_bdev1", 00:18:09.764 "uuid": "a3580343-3fed-43dc-98f8-8e4c77111cc0", 00:18:09.764 "strip_size_kb": 0, 00:18:09.764 "state": "online", 00:18:09.764 "raid_level": "raid1", 00:18:09.764 "superblock": true, 00:18:09.764 "num_base_bdevs": 2, 00:18:09.764 "num_base_bdevs_discovered": 2, 00:18:09.764 "num_base_bdevs_operational": 2, 00:18:09.764 "process": { 00:18:09.764 "type": "rebuild", 00:18:09.764 "target": "spare", 00:18:09.764 "progress": { 00:18:09.764 "blocks": 2816, 00:18:09.764 "percent": 35 00:18:09.764 } 00:18:09.764 }, 00:18:09.764 "base_bdevs_list": [ 00:18:09.764 { 00:18:09.764 "name": "spare", 00:18:09.764 "uuid": "d1d9976c-5bf9-576f-b076-dfee8876442a", 00:18:09.764 "is_configured": true, 00:18:09.764 "data_offset": 256, 00:18:09.764 "data_size": 7936 00:18:09.764 }, 00:18:09.764 { 00:18:09.764 "name": "BaseBdev2", 00:18:09.764 "uuid": "1984c5d7-a438-54fe-ac4a-7f06c4ff9f67", 00:18:09.764 "is_configured": true, 00:18:09.764 "data_offset": 256, 00:18:09.764 "data_size": 7936 00:18:09.764 } 00:18:09.764 ] 00:18:09.764 }' 00:18:09.764 10:02:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:09.764 10:02:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:09.764 10:02:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- 
# jq -r '.process.target // "none"' 00:18:10.081 10:02:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:10.081 10:02:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:11.031 10:02:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:11.031 10:02:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:11.031 10:02:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:11.031 10:02:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:11.031 10:02:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:11.031 10:02:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:11.031 10:02:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.031 10:02:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.031 10:02:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:11.031 10:02:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:11.031 10:02:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.031 10:02:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:11.031 "name": "raid_bdev1", 00:18:11.031 "uuid": "a3580343-3fed-43dc-98f8-8e4c77111cc0", 00:18:11.031 "strip_size_kb": 0, 00:18:11.031 "state": "online", 00:18:11.031 "raid_level": "raid1", 00:18:11.031 "superblock": true, 00:18:11.031 "num_base_bdevs": 2, 00:18:11.031 "num_base_bdevs_discovered": 2, 00:18:11.031 "num_base_bdevs_operational": 2, 00:18:11.031 "process": { 00:18:11.031 
"type": "rebuild", 00:18:11.031 "target": "spare", 00:18:11.031 "progress": { 00:18:11.031 "blocks": 5632, 00:18:11.031 "percent": 70 00:18:11.031 } 00:18:11.031 }, 00:18:11.031 "base_bdevs_list": [ 00:18:11.031 { 00:18:11.031 "name": "spare", 00:18:11.031 "uuid": "d1d9976c-5bf9-576f-b076-dfee8876442a", 00:18:11.031 "is_configured": true, 00:18:11.031 "data_offset": 256, 00:18:11.031 "data_size": 7936 00:18:11.031 }, 00:18:11.031 { 00:18:11.031 "name": "BaseBdev2", 00:18:11.031 "uuid": "1984c5d7-a438-54fe-ac4a-7f06c4ff9f67", 00:18:11.031 "is_configured": true, 00:18:11.031 "data_offset": 256, 00:18:11.031 "data_size": 7936 00:18:11.031 } 00:18:11.031 ] 00:18:11.031 }' 00:18:11.031 10:02:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:11.031 10:02:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:11.031 10:02:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:11.031 10:02:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:11.031 10:02:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:11.966 [2024-10-21 10:02:48.218752] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:11.966 [2024-10-21 10:02:48.218977] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:11.966 [2024-10-21 10:02:48.219160] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:11.966 10:02:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:11.966 10:02:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:11.966 10:02:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:11.966 
10:02:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:11.966 10:02:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:11.966 10:02:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:11.966 10:02:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.966 10:02:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:11.966 10:02:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.966 10:02:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:11.966 10:02:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.966 10:02:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:11.966 "name": "raid_bdev1", 00:18:11.966 "uuid": "a3580343-3fed-43dc-98f8-8e4c77111cc0", 00:18:11.966 "strip_size_kb": 0, 00:18:11.966 "state": "online", 00:18:11.966 "raid_level": "raid1", 00:18:11.966 "superblock": true, 00:18:11.966 "num_base_bdevs": 2, 00:18:11.966 "num_base_bdevs_discovered": 2, 00:18:11.966 "num_base_bdevs_operational": 2, 00:18:11.966 "base_bdevs_list": [ 00:18:11.966 { 00:18:11.966 "name": "spare", 00:18:11.966 "uuid": "d1d9976c-5bf9-576f-b076-dfee8876442a", 00:18:11.966 "is_configured": true, 00:18:11.966 "data_offset": 256, 00:18:11.966 "data_size": 7936 00:18:11.966 }, 00:18:11.966 { 00:18:11.966 "name": "BaseBdev2", 00:18:11.966 "uuid": "1984c5d7-a438-54fe-ac4a-7f06c4ff9f67", 00:18:11.967 "is_configured": true, 00:18:11.967 "data_offset": 256, 00:18:11.967 "data_size": 7936 00:18:11.967 } 00:18:11.967 ] 00:18:11.967 }' 00:18:11.967 10:02:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:12.227 10:02:48 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:12.227 10:02:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:12.227 10:02:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:12.227 10:02:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:18:12.227 10:02:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:12.227 10:02:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:12.227 10:02:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:12.227 10:02:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:12.227 10:02:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:12.227 10:02:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:12.227 10:02:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.227 10:02:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.227 10:02:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:12.227 10:02:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.227 10:02:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:12.227 "name": "raid_bdev1", 00:18:12.227 "uuid": "a3580343-3fed-43dc-98f8-8e4c77111cc0", 00:18:12.227 "strip_size_kb": 0, 00:18:12.227 "state": "online", 00:18:12.227 "raid_level": "raid1", 00:18:12.227 "superblock": true, 00:18:12.227 "num_base_bdevs": 2, 00:18:12.227 "num_base_bdevs_discovered": 2, 00:18:12.227 
"num_base_bdevs_operational": 2, 00:18:12.227 "base_bdevs_list": [ 00:18:12.227 { 00:18:12.227 "name": "spare", 00:18:12.227 "uuid": "d1d9976c-5bf9-576f-b076-dfee8876442a", 00:18:12.227 "is_configured": true, 00:18:12.227 "data_offset": 256, 00:18:12.227 "data_size": 7936 00:18:12.227 }, 00:18:12.227 { 00:18:12.227 "name": "BaseBdev2", 00:18:12.227 "uuid": "1984c5d7-a438-54fe-ac4a-7f06c4ff9f67", 00:18:12.227 "is_configured": true, 00:18:12.227 "data_offset": 256, 00:18:12.227 "data_size": 7936 00:18:12.227 } 00:18:12.227 ] 00:18:12.227 }' 00:18:12.227 10:02:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:12.227 10:02:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:12.227 10:02:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:12.227 10:02:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:12.227 10:02:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:12.227 10:02:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:12.227 10:02:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:12.227 10:02:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:12.227 10:02:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:12.227 10:02:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:12.227 10:02:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:12.227 10:02:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:12.227 10:02:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 
-- # local num_base_bdevs_discovered 00:18:12.227 10:02:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:12.227 10:02:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.227 10:02:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:12.227 10:02:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.227 10:02:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:12.227 10:02:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.485 10:02:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:12.485 "name": "raid_bdev1", 00:18:12.485 "uuid": "a3580343-3fed-43dc-98f8-8e4c77111cc0", 00:18:12.485 "strip_size_kb": 0, 00:18:12.485 "state": "online", 00:18:12.485 "raid_level": "raid1", 00:18:12.485 "superblock": true, 00:18:12.485 "num_base_bdevs": 2, 00:18:12.485 "num_base_bdevs_discovered": 2, 00:18:12.485 "num_base_bdevs_operational": 2, 00:18:12.485 "base_bdevs_list": [ 00:18:12.485 { 00:18:12.485 "name": "spare", 00:18:12.485 "uuid": "d1d9976c-5bf9-576f-b076-dfee8876442a", 00:18:12.485 "is_configured": true, 00:18:12.485 "data_offset": 256, 00:18:12.485 "data_size": 7936 00:18:12.485 }, 00:18:12.485 { 00:18:12.485 "name": "BaseBdev2", 00:18:12.485 "uuid": "1984c5d7-a438-54fe-ac4a-7f06c4ff9f67", 00:18:12.485 "is_configured": true, 00:18:12.485 "data_offset": 256, 00:18:12.485 "data_size": 7936 00:18:12.485 } 00:18:12.485 ] 00:18:12.485 }' 00:18:12.485 10:02:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:12.485 10:02:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:12.744 10:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 
00:18:12.744 10:02:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.744 10:02:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:12.744 [2024-10-21 10:02:49.219284] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:12.744 [2024-10-21 10:02:49.219379] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:12.744 [2024-10-21 10:02:49.219528] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:12.744 [2024-10-21 10:02:49.219661] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:12.744 [2024-10-21 10:02:49.219721] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000005b80 name raid_bdev1, state offline 00:18:12.744 10:02:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.744 10:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.744 10:02:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.744 10:02:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:12.744 10:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:18:12.744 10:02:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.744 10:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:12.744 10:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:12.744 10:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:12.744 10:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 
00:18:12.744 10:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:12.744 10:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:12.744 10:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:12.744 10:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:12.744 10:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:12.744 10:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:18:12.744 10:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:12.744 10:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:12.744 10:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:13.004 /dev/nbd0 00:18:13.004 10:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:13.004 10:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:13.004 10:02:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:18:13.004 10:02:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:18:13.004 10:02:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:13.004 10:02:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:13.004 10:02:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:18:13.004 10:02:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:18:13.004 10:02:49 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:13.004 10:02:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:13.004 10:02:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:13.004 1+0 records in 00:18:13.004 1+0 records out 00:18:13.004 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000325359 s, 12.6 MB/s 00:18:13.004 10:02:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:13.004 10:02:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:18:13.004 10:02:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:13.004 10:02:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:13.004 10:02:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:18:13.004 10:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:13.004 10:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:13.004 10:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:13.264 /dev/nbd1 00:18:13.264 10:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:13.264 10:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:13.264 10:02:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:18:13.264 10:02:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:18:13.264 10:02:49 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:13.264 10:02:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:13.264 10:02:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:18:13.264 10:02:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:18:13.264 10:02:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:13.264 10:02:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:13.264 10:02:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:13.264 1+0 records in 00:18:13.264 1+0 records out 00:18:13.265 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000574775 s, 7.1 MB/s 00:18:13.265 10:02:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:13.265 10:02:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:18:13.265 10:02:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:13.265 10:02:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:13.265 10:02:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:18:13.265 10:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:13.265 10:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:13.265 10:02:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:13.524 10:02:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 
00:18:13.524 10:02:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:13.524 10:02:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:13.524 10:02:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:13.524 10:02:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:18:13.524 10:02:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:13.524 10:02:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:13.784 10:02:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:13.784 10:02:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:13.784 10:02:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:13.784 10:02:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:13.784 10:02:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:13.784 10:02:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:13.784 10:02:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:18:13.784 10:02:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:18:13.784 10:02:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:13.784 10:02:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:14.044 10:02:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:14.044 10:02:50 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:14.044 10:02:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:14.044 10:02:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:14.044 10:02:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:14.044 10:02:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:14.044 10:02:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:18:14.044 10:02:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:18:14.044 10:02:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:14.044 10:02:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:14.044 10:02:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.044 10:02:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:14.044 10:02:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.044 10:02:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:14.044 10:02:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.044 10:02:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:14.044 [2024-10-21 10:02:50.522946] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:14.044 [2024-10-21 10:02:50.523022] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:14.044 [2024-10-21 10:02:50.523072] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:18:14.044 [2024-10-21 10:02:50.523083] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:14.044 [2024-10-21 10:02:50.526084] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:14.044 [2024-10-21 10:02:50.526128] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:14.044 [2024-10-21 10:02:50.526242] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:14.044 [2024-10-21 10:02:50.526311] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:14.044 [2024-10-21 10:02:50.526509] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:14.044 spare 00:18:14.044 10:02:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.044 10:02:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:14.044 10:02:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.044 10:02:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:14.044 [2024-10-21 10:02:50.626453] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000005f00 00:18:14.044 [2024-10-21 10:02:50.626494] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:14.044 [2024-10-21 10:02:50.626879] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c18e0 00:18:14.044 [2024-10-21 10:02:50.627132] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000005f00 00:18:14.044 [2024-10-21 10:02:50.627154] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000005f00 00:18:14.044 [2024-10-21 10:02:50.627389] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:14.044 10:02:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:18:14.044 10:02:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:14.044 10:02:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:14.044 10:02:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:14.044 10:02:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:14.044 10:02:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:14.044 10:02:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:14.044 10:02:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:14.044 10:02:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:14.044 10:02:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:14.044 10:02:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:14.304 10:02:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.304 10:02:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:14.304 10:02:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.304 10:02:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:14.304 10:02:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.304 10:02:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:14.304 "name": "raid_bdev1", 00:18:14.304 "uuid": "a3580343-3fed-43dc-98f8-8e4c77111cc0", 00:18:14.304 "strip_size_kb": 0, 00:18:14.304 "state": "online", 00:18:14.304 "raid_level": 
"raid1", 00:18:14.304 "superblock": true, 00:18:14.304 "num_base_bdevs": 2, 00:18:14.304 "num_base_bdevs_discovered": 2, 00:18:14.304 "num_base_bdevs_operational": 2, 00:18:14.304 "base_bdevs_list": [ 00:18:14.304 { 00:18:14.304 "name": "spare", 00:18:14.304 "uuid": "d1d9976c-5bf9-576f-b076-dfee8876442a", 00:18:14.304 "is_configured": true, 00:18:14.304 "data_offset": 256, 00:18:14.304 "data_size": 7936 00:18:14.304 }, 00:18:14.304 { 00:18:14.304 "name": "BaseBdev2", 00:18:14.304 "uuid": "1984c5d7-a438-54fe-ac4a-7f06c4ff9f67", 00:18:14.304 "is_configured": true, 00:18:14.304 "data_offset": 256, 00:18:14.304 "data_size": 7936 00:18:14.304 } 00:18:14.304 ] 00:18:14.304 }' 00:18:14.304 10:02:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:14.304 10:02:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:14.563 10:02:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:14.563 10:02:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:14.563 10:02:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:14.563 10:02:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:14.563 10:02:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:14.563 10:02:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.563 10:02:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:14.563 10:02:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.563 10:02:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:14.563 10:02:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:18:14.563 10:02:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:14.563 "name": "raid_bdev1", 00:18:14.563 "uuid": "a3580343-3fed-43dc-98f8-8e4c77111cc0", 00:18:14.563 "strip_size_kb": 0, 00:18:14.563 "state": "online", 00:18:14.563 "raid_level": "raid1", 00:18:14.563 "superblock": true, 00:18:14.563 "num_base_bdevs": 2, 00:18:14.563 "num_base_bdevs_discovered": 2, 00:18:14.563 "num_base_bdevs_operational": 2, 00:18:14.563 "base_bdevs_list": [ 00:18:14.563 { 00:18:14.563 "name": "spare", 00:18:14.563 "uuid": "d1d9976c-5bf9-576f-b076-dfee8876442a", 00:18:14.563 "is_configured": true, 00:18:14.563 "data_offset": 256, 00:18:14.563 "data_size": 7936 00:18:14.563 }, 00:18:14.563 { 00:18:14.563 "name": "BaseBdev2", 00:18:14.563 "uuid": "1984c5d7-a438-54fe-ac4a-7f06c4ff9f67", 00:18:14.563 "is_configured": true, 00:18:14.563 "data_offset": 256, 00:18:14.563 "data_size": 7936 00:18:14.563 } 00:18:14.563 ] 00:18:14.563 }' 00:18:14.563 10:02:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:14.823 10:02:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:14.823 10:02:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:14.823 10:02:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:14.823 10:02:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.823 10:02:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:14.823 10:02:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.823 10:02:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:14.823 10:02:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:18:14.823 10:02:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:14.823 10:02:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:14.823 10:02:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.823 10:02:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:14.823 [2024-10-21 10:02:51.286363] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:14.823 10:02:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.823 10:02:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:14.823 10:02:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:14.823 10:02:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:14.823 10:02:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:14.823 10:02:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:14.823 10:02:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:14.823 10:02:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:14.823 10:02:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:14.823 10:02:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:14.823 10:02:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:14.823 10:02:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.823 10:02:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:18:14.823 10:02:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.823 10:02:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:14.823 10:02:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.823 10:02:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:14.823 "name": "raid_bdev1", 00:18:14.823 "uuid": "a3580343-3fed-43dc-98f8-8e4c77111cc0", 00:18:14.823 "strip_size_kb": 0, 00:18:14.823 "state": "online", 00:18:14.823 "raid_level": "raid1", 00:18:14.823 "superblock": true, 00:18:14.823 "num_base_bdevs": 2, 00:18:14.823 "num_base_bdevs_discovered": 1, 00:18:14.823 "num_base_bdevs_operational": 1, 00:18:14.823 "base_bdevs_list": [ 00:18:14.823 { 00:18:14.823 "name": null, 00:18:14.823 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.823 "is_configured": false, 00:18:14.823 "data_offset": 0, 00:18:14.823 "data_size": 7936 00:18:14.823 }, 00:18:14.823 { 00:18:14.823 "name": "BaseBdev2", 00:18:14.823 "uuid": "1984c5d7-a438-54fe-ac4a-7f06c4ff9f67", 00:18:14.823 "is_configured": true, 00:18:14.823 "data_offset": 256, 00:18:14.823 "data_size": 7936 00:18:14.823 } 00:18:14.823 ] 00:18:14.823 }' 00:18:14.823 10:02:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:14.823 10:02:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:15.393 10:02:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:15.393 10:02:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.393 10:02:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:15.393 [2024-10-21 10:02:51.765660] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:15.393 
[2024-10-21 10:02:51.765915] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:15.393 [2024-10-21 10:02:51.765943] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:18:15.393 [2024-10-21 10:02:51.766002] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:15.393 [2024-10-21 10:02:51.788438] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c19b0 00:18:15.393 10:02:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.393 10:02:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:15.393 [2024-10-21 10:02:51.791066] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:16.332 10:02:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:16.332 10:02:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:16.332 10:02:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:16.332 10:02:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:16.332 10:02:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:16.332 10:02:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.332 10:02:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:16.332 10:02:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.332 10:02:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:16.332 10:02:52 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.332 10:02:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:16.332 "name": "raid_bdev1", 00:18:16.332 "uuid": "a3580343-3fed-43dc-98f8-8e4c77111cc0", 00:18:16.332 "strip_size_kb": 0, 00:18:16.332 "state": "online", 00:18:16.332 "raid_level": "raid1", 00:18:16.332 "superblock": true, 00:18:16.332 "num_base_bdevs": 2, 00:18:16.332 "num_base_bdevs_discovered": 2, 00:18:16.332 "num_base_bdevs_operational": 2, 00:18:16.332 "process": { 00:18:16.332 "type": "rebuild", 00:18:16.332 "target": "spare", 00:18:16.332 "progress": { 00:18:16.332 "blocks": 2560, 00:18:16.332 "percent": 32 00:18:16.332 } 00:18:16.332 }, 00:18:16.332 "base_bdevs_list": [ 00:18:16.332 { 00:18:16.332 "name": "spare", 00:18:16.332 "uuid": "d1d9976c-5bf9-576f-b076-dfee8876442a", 00:18:16.332 "is_configured": true, 00:18:16.332 "data_offset": 256, 00:18:16.332 "data_size": 7936 00:18:16.332 }, 00:18:16.332 { 00:18:16.332 "name": "BaseBdev2", 00:18:16.332 "uuid": "1984c5d7-a438-54fe-ac4a-7f06c4ff9f67", 00:18:16.332 "is_configured": true, 00:18:16.332 "data_offset": 256, 00:18:16.332 "data_size": 7936 00:18:16.332 } 00:18:16.332 ] 00:18:16.332 }' 00:18:16.332 10:02:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:16.332 10:02:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:16.332 10:02:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:16.332 10:02:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:16.332 10:02:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:16.332 10:02:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.332 10:02:52 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:18:16.592 [2024-10-21 10:02:52.927962] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:16.592 [2024-10-21 10:02:53.000652] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:16.592 [2024-10-21 10:02:53.000820] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:16.592 [2024-10-21 10:02:53.000872] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:16.592 [2024-10-21 10:02:53.000903] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:16.592 10:02:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.592 10:02:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:16.592 10:02:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:16.592 10:02:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:16.592 10:02:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:16.592 10:02:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:16.592 10:02:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:16.592 10:02:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:16.592 10:02:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:16.592 10:02:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:16.592 10:02:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:16.592 10:02:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:18:16.592 10:02:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:16.592 10:02:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.592 10:02:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:16.592 10:02:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.592 10:02:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:16.592 "name": "raid_bdev1", 00:18:16.592 "uuid": "a3580343-3fed-43dc-98f8-8e4c77111cc0", 00:18:16.592 "strip_size_kb": 0, 00:18:16.592 "state": "online", 00:18:16.592 "raid_level": "raid1", 00:18:16.592 "superblock": true, 00:18:16.592 "num_base_bdevs": 2, 00:18:16.592 "num_base_bdevs_discovered": 1, 00:18:16.592 "num_base_bdevs_operational": 1, 00:18:16.592 "base_bdevs_list": [ 00:18:16.592 { 00:18:16.592 "name": null, 00:18:16.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:16.592 "is_configured": false, 00:18:16.592 "data_offset": 0, 00:18:16.592 "data_size": 7936 00:18:16.592 }, 00:18:16.592 { 00:18:16.592 "name": "BaseBdev2", 00:18:16.592 "uuid": "1984c5d7-a438-54fe-ac4a-7f06c4ff9f67", 00:18:16.592 "is_configured": true, 00:18:16.592 "data_offset": 256, 00:18:16.592 "data_size": 7936 00:18:16.592 } 00:18:16.592 ] 00:18:16.592 }' 00:18:16.592 10:02:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:16.592 10:02:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:17.161 10:02:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:17.161 10:02:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.161 10:02:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 
00:18:17.161 [2024-10-21 10:02:53.494687] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:17.161 [2024-10-21 10:02:53.494830] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:17.161 [2024-10-21 10:02:53.494881] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:18:17.161 [2024-10-21 10:02:53.494923] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:17.161 [2024-10-21 10:02:53.495652] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:17.161 [2024-10-21 10:02:53.495736] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:17.161 [2024-10-21 10:02:53.495892] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:17.161 [2024-10-21 10:02:53.495949] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:17.161 [2024-10-21 10:02:53.496003] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:17.161 [2024-10-21 10:02:53.496088] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:17.161 [2024-10-21 10:02:53.518590] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1a80 00:18:17.161 spare 00:18:17.161 10:02:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.161 10:02:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:17.161 [2024-10-21 10:02:53.521094] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:18.100 10:02:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:18.100 10:02:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:18.100 10:02:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:18.100 10:02:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:18.100 10:02:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:18.100 10:02:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.100 10:02:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:18.100 10:02:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.100 10:02:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:18.100 10:02:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.100 10:02:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:18.100 "name": "raid_bdev1", 00:18:18.100 "uuid": "a3580343-3fed-43dc-98f8-8e4c77111cc0", 00:18:18.100 "strip_size_kb": 0, 00:18:18.100 
"state": "online", 00:18:18.100 "raid_level": "raid1", 00:18:18.100 "superblock": true, 00:18:18.100 "num_base_bdevs": 2, 00:18:18.100 "num_base_bdevs_discovered": 2, 00:18:18.100 "num_base_bdevs_operational": 2, 00:18:18.100 "process": { 00:18:18.100 "type": "rebuild", 00:18:18.100 "target": "spare", 00:18:18.100 "progress": { 00:18:18.100 "blocks": 2560, 00:18:18.100 "percent": 32 00:18:18.100 } 00:18:18.100 }, 00:18:18.100 "base_bdevs_list": [ 00:18:18.100 { 00:18:18.100 "name": "spare", 00:18:18.100 "uuid": "d1d9976c-5bf9-576f-b076-dfee8876442a", 00:18:18.100 "is_configured": true, 00:18:18.100 "data_offset": 256, 00:18:18.100 "data_size": 7936 00:18:18.100 }, 00:18:18.100 { 00:18:18.100 "name": "BaseBdev2", 00:18:18.100 "uuid": "1984c5d7-a438-54fe-ac4a-7f06c4ff9f67", 00:18:18.100 "is_configured": true, 00:18:18.100 "data_offset": 256, 00:18:18.100 "data_size": 7936 00:18:18.100 } 00:18:18.100 ] 00:18:18.100 }' 00:18:18.100 10:02:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:18.100 10:02:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:18.100 10:02:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:18.100 10:02:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:18.100 10:02:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:18.100 10:02:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.100 10:02:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:18.100 [2024-10-21 10:02:54.680979] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:18.359 [2024-10-21 10:02:54.730630] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:18:18.359 [2024-10-21 10:02:54.730770] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:18.359 [2024-10-21 10:02:54.730799] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:18.359 [2024-10-21 10:02:54.730809] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:18.359 10:02:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.359 10:02:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:18.359 10:02:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:18.359 10:02:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:18.359 10:02:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:18.359 10:02:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:18.359 10:02:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:18.359 10:02:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:18.359 10:02:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:18.359 10:02:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:18.359 10:02:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:18.359 10:02:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:18.359 10:02:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.359 10:02:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.359 10:02:54 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:18.359 10:02:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.359 10:02:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:18.359 "name": "raid_bdev1", 00:18:18.359 "uuid": "a3580343-3fed-43dc-98f8-8e4c77111cc0", 00:18:18.359 "strip_size_kb": 0, 00:18:18.359 "state": "online", 00:18:18.359 "raid_level": "raid1", 00:18:18.359 "superblock": true, 00:18:18.359 "num_base_bdevs": 2, 00:18:18.359 "num_base_bdevs_discovered": 1, 00:18:18.359 "num_base_bdevs_operational": 1, 00:18:18.359 "base_bdevs_list": [ 00:18:18.359 { 00:18:18.359 "name": null, 00:18:18.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:18.359 "is_configured": false, 00:18:18.359 "data_offset": 0, 00:18:18.359 "data_size": 7936 00:18:18.359 }, 00:18:18.359 { 00:18:18.359 "name": "BaseBdev2", 00:18:18.359 "uuid": "1984c5d7-a438-54fe-ac4a-7f06c4ff9f67", 00:18:18.359 "is_configured": true, 00:18:18.359 "data_offset": 256, 00:18:18.359 "data_size": 7936 00:18:18.359 } 00:18:18.359 ] 00:18:18.359 }' 00:18:18.359 10:02:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:18.359 10:02:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:18.618 10:02:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:18.618 10:02:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:18.618 10:02:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:18.618 10:02:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:18.618 10:02:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:18.618 10:02:55 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.618 10:02:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:18.618 10:02:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.618 10:02:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:18.879 10:02:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.879 10:02:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:18.879 "name": "raid_bdev1", 00:18:18.879 "uuid": "a3580343-3fed-43dc-98f8-8e4c77111cc0", 00:18:18.879 "strip_size_kb": 0, 00:18:18.879 "state": "online", 00:18:18.879 "raid_level": "raid1", 00:18:18.879 "superblock": true, 00:18:18.879 "num_base_bdevs": 2, 00:18:18.879 "num_base_bdevs_discovered": 1, 00:18:18.879 "num_base_bdevs_operational": 1, 00:18:18.879 "base_bdevs_list": [ 00:18:18.879 { 00:18:18.879 "name": null, 00:18:18.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:18.879 "is_configured": false, 00:18:18.879 "data_offset": 0, 00:18:18.879 "data_size": 7936 00:18:18.879 }, 00:18:18.879 { 00:18:18.879 "name": "BaseBdev2", 00:18:18.879 "uuid": "1984c5d7-a438-54fe-ac4a-7f06c4ff9f67", 00:18:18.879 "is_configured": true, 00:18:18.879 "data_offset": 256, 00:18:18.879 "data_size": 7936 00:18:18.879 } 00:18:18.879 ] 00:18:18.879 }' 00:18:18.879 10:02:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:18.879 10:02:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:18.879 10:02:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:18.879 10:02:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:18.879 10:02:55 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:18.879 10:02:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.879 10:02:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:18.879 10:02:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.879 10:02:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:18.879 10:02:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.879 10:02:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:18.879 [2024-10-21 10:02:55.344695] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:18.879 [2024-10-21 10:02:55.344782] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:18.879 [2024-10-21 10:02:55.344811] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:18:18.879 [2024-10-21 10:02:55.344823] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:18.879 [2024-10-21 10:02:55.345412] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:18.879 [2024-10-21 10:02:55.345432] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:18.879 [2024-10-21 10:02:55.345535] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:18.879 [2024-10-21 10:02:55.345553] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:18.879 [2024-10-21 10:02:55.345586] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:18.879 [2024-10-21 10:02:55.345600] 
bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:18.879 BaseBdev1 00:18:18.879 10:02:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.879 10:02:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:19.817 10:02:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:19.817 10:02:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:19.817 10:02:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:19.817 10:02:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:19.817 10:02:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:19.817 10:02:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:19.817 10:02:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:19.817 10:02:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:19.817 10:02:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:19.817 10:02:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:19.817 10:02:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.817 10:02:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:19.817 10:02:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.817 10:02:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:19.817 10:02:56 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.817 10:02:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:19.817 "name": "raid_bdev1", 00:18:19.817 "uuid": "a3580343-3fed-43dc-98f8-8e4c77111cc0", 00:18:19.817 "strip_size_kb": 0, 00:18:19.817 "state": "online", 00:18:19.817 "raid_level": "raid1", 00:18:19.817 "superblock": true, 00:18:19.817 "num_base_bdevs": 2, 00:18:19.817 "num_base_bdevs_discovered": 1, 00:18:19.817 "num_base_bdevs_operational": 1, 00:18:19.817 "base_bdevs_list": [ 00:18:19.817 { 00:18:19.817 "name": null, 00:18:19.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:19.817 "is_configured": false, 00:18:19.817 "data_offset": 0, 00:18:19.817 "data_size": 7936 00:18:19.817 }, 00:18:19.817 { 00:18:19.817 "name": "BaseBdev2", 00:18:19.817 "uuid": "1984c5d7-a438-54fe-ac4a-7f06c4ff9f67", 00:18:19.817 "is_configured": true, 00:18:19.817 "data_offset": 256, 00:18:19.817 "data_size": 7936 00:18:19.817 } 00:18:19.817 ] 00:18:19.817 }' 00:18:19.817 10:02:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:19.817 10:02:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:20.387 10:02:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:20.387 10:02:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:20.387 10:02:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:20.387 10:02:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:20.387 10:02:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:20.387 10:02:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.387 10:02:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:18:20.387 10:02:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:20.387 10:02:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.387 10:02:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.387 10:02:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:20.387 "name": "raid_bdev1", 00:18:20.387 "uuid": "a3580343-3fed-43dc-98f8-8e4c77111cc0", 00:18:20.387 "strip_size_kb": 0, 00:18:20.387 "state": "online", 00:18:20.387 "raid_level": "raid1", 00:18:20.387 "superblock": true, 00:18:20.387 "num_base_bdevs": 2, 00:18:20.387 "num_base_bdevs_discovered": 1, 00:18:20.387 "num_base_bdevs_operational": 1, 00:18:20.387 "base_bdevs_list": [ 00:18:20.387 { 00:18:20.387 "name": null, 00:18:20.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.387 "is_configured": false, 00:18:20.387 "data_offset": 0, 00:18:20.387 "data_size": 7936 00:18:20.387 }, 00:18:20.387 { 00:18:20.387 "name": "BaseBdev2", 00:18:20.387 "uuid": "1984c5d7-a438-54fe-ac4a-7f06c4ff9f67", 00:18:20.387 "is_configured": true, 00:18:20.387 "data_offset": 256, 00:18:20.387 "data_size": 7936 00:18:20.387 } 00:18:20.387 ] 00:18:20.387 }' 00:18:20.387 10:02:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:20.387 10:02:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:20.387 10:02:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:20.387 10:02:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:20.387 10:02:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:20.387 10:02:56 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@650 -- # local es=0 00:18:20.387 10:02:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:20.387 10:02:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:20.387 10:02:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:20.387 10:02:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:20.387 10:02:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:20.387 10:02:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:20.387 10:02:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.387 10:02:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:20.387 [2024-10-21 10:02:56.930207] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:20.387 [2024-10-21 10:02:56.930445] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:20.387 [2024-10-21 10:02:56.930470] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:20.387 request: 00:18:20.387 { 00:18:20.387 "base_bdev": "BaseBdev1", 00:18:20.387 "raid_bdev": "raid_bdev1", 00:18:20.387 "method": "bdev_raid_add_base_bdev", 00:18:20.387 "req_id": 1 00:18:20.387 } 00:18:20.387 Got JSON-RPC error response 00:18:20.387 response: 00:18:20.387 { 00:18:20.387 "code": -22, 00:18:20.387 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:20.387 } 00:18:20.387 10:02:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 
00:18:20.387 10:02:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # es=1 00:18:20.387 10:02:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:20.387 10:02:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:20.387 10:02:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:20.387 10:02:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:21.768 10:02:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:21.768 10:02:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:21.768 10:02:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:21.768 10:02:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:21.768 10:02:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:21.768 10:02:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:21.768 10:02:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:21.768 10:02:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:21.768 10:02:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:21.768 10:02:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:21.768 10:02:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.768 10:02:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:21.768 10:02:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:18:21.768 10:02:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:21.768 10:02:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.768 10:02:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:21.768 "name": "raid_bdev1", 00:18:21.768 "uuid": "a3580343-3fed-43dc-98f8-8e4c77111cc0", 00:18:21.768 "strip_size_kb": 0, 00:18:21.768 "state": "online", 00:18:21.768 "raid_level": "raid1", 00:18:21.768 "superblock": true, 00:18:21.768 "num_base_bdevs": 2, 00:18:21.768 "num_base_bdevs_discovered": 1, 00:18:21.768 "num_base_bdevs_operational": 1, 00:18:21.768 "base_bdevs_list": [ 00:18:21.768 { 00:18:21.768 "name": null, 00:18:21.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:21.768 "is_configured": false, 00:18:21.768 "data_offset": 0, 00:18:21.768 "data_size": 7936 00:18:21.768 }, 00:18:21.768 { 00:18:21.768 "name": "BaseBdev2", 00:18:21.768 "uuid": "1984c5d7-a438-54fe-ac4a-7f06c4ff9f67", 00:18:21.768 "is_configured": true, 00:18:21.768 "data_offset": 256, 00:18:21.768 "data_size": 7936 00:18:21.768 } 00:18:21.768 ] 00:18:21.768 }' 00:18:21.768 10:02:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:21.768 10:02:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:22.029 10:02:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:22.029 10:02:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:22.029 10:02:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:22.029 10:02:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:22.029 10:02:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:22.029 10:02:58 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.029 10:02:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.029 10:02:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:22.029 10:02:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:22.029 10:02:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.029 10:02:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:22.029 "name": "raid_bdev1", 00:18:22.029 "uuid": "a3580343-3fed-43dc-98f8-8e4c77111cc0", 00:18:22.029 "strip_size_kb": 0, 00:18:22.029 "state": "online", 00:18:22.029 "raid_level": "raid1", 00:18:22.029 "superblock": true, 00:18:22.029 "num_base_bdevs": 2, 00:18:22.029 "num_base_bdevs_discovered": 1, 00:18:22.029 "num_base_bdevs_operational": 1, 00:18:22.029 "base_bdevs_list": [ 00:18:22.029 { 00:18:22.029 "name": null, 00:18:22.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:22.029 "is_configured": false, 00:18:22.029 "data_offset": 0, 00:18:22.029 "data_size": 7936 00:18:22.029 }, 00:18:22.029 { 00:18:22.029 "name": "BaseBdev2", 00:18:22.029 "uuid": "1984c5d7-a438-54fe-ac4a-7f06c4ff9f67", 00:18:22.029 "is_configured": true, 00:18:22.029 "data_offset": 256, 00:18:22.029 "data_size": 7936 00:18:22.029 } 00:18:22.029 ] 00:18:22.029 }' 00:18:22.029 10:02:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:22.029 10:02:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:22.029 10:02:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:22.029 10:02:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:22.029 10:02:58 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 86183 00:18:22.029 10:02:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@950 -- # '[' -z 86183 ']' 00:18:22.029 10:02:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # kill -0 86183 00:18:22.029 10:02:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@955 -- # uname 00:18:22.029 10:02:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:22.029 10:02:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86183 00:18:22.029 killing process with pid 86183 00:18:22.029 Received shutdown signal, test time was about 60.000000 seconds 00:18:22.029 00:18:22.029 Latency(us) 00:18:22.029 [2024-10-21T10:02:58.624Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:22.029 [2024-10-21T10:02:58.624Z] =================================================================================================================== 00:18:22.029 [2024-10-21T10:02:58.624Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:22.029 10:02:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:22.029 10:02:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:22.029 10:02:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86183' 00:18:22.029 10:02:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@969 -- # kill 86183 00:18:22.029 [2024-10-21 10:02:58.588684] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:22.029 10:02:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@974 -- # wait 86183 00:18:22.029 [2024-10-21 10:02:58.588861] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:22.029 [2024-10-21 
10:02:58.588928] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:22.029 [2024-10-21 10:02:58.588944] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000005f00 name raid_bdev1, state offline 00:18:22.661 [2024-10-21 10:02:58.992327] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:24.042 10:03:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:18:24.042 00:18:24.042 real 0m20.861s 00:18:24.042 user 0m26.888s 00:18:24.042 sys 0m2.873s 00:18:24.042 10:03:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:24.042 10:03:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:24.042 ************************************ 00:18:24.042 END TEST raid_rebuild_test_sb_4k 00:18:24.042 ************************************ 00:18:24.042 10:03:00 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:18:24.042 10:03:00 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:18:24.042 10:03:00 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:18:24.042 10:03:00 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:24.042 10:03:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:24.042 ************************************ 00:18:24.042 START TEST raid_state_function_test_sb_md_separate 00:18:24.042 ************************************ 00:18:24.042 10:03:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:18:24.043 10:03:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:18:24.043 10:03:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:18:24.043 
10:03:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:18:24.043 10:03:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:24.043 10:03:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:24.043 10:03:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:24.043 10:03:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:24.043 10:03:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:24.043 10:03:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:24.043 10:03:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:24.043 10:03:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:24.043 10:03:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:24.043 10:03:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:24.043 10:03:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:24.043 10:03:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:24.043 10:03:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:24.043 10:03:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:24.043 10:03:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:24.043 10:03:00 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:18:24.043 10:03:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:18:24.043 10:03:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:18:24.043 10:03:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:18:24.043 10:03:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=86892 00:18:24.043 10:03:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:24.043 Process raid pid: 86892 00:18:24.043 10:03:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 86892' 00:18:24.043 10:03:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 86892 00:18:24.043 10:03:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@831 -- # '[' -z 86892 ']' 00:18:24.043 10:03:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:24.043 10:03:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:24.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:24.043 10:03:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:24.043 10:03:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:24.043 10:03:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:24.303 [2024-10-21 10:03:00.639141] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:18:24.303 [2024-10-21 10:03:00.639290] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:24.303 [2024-10-21 10:03:00.805588] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:24.562 [2024-10-21 10:03:00.973824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:24.822 [2024-10-21 10:03:01.282411] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:24.822 [2024-10-21 10:03:01.282461] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:25.082 10:03:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:25.082 10:03:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # return 0 00:18:25.082 10:03:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:25.082 10:03:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.082 10:03:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:25.082 [2024-10-21 10:03:01.487713] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:25.082 [2024-10-21 10:03:01.487782] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:18:25.082 [2024-10-21 10:03:01.487795] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:25.082 [2024-10-21 10:03:01.487807] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:25.082 10:03:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.082 10:03:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:25.082 10:03:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:25.082 10:03:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:25.082 10:03:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:25.082 10:03:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:25.082 10:03:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:25.082 10:03:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:25.082 10:03:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:25.082 10:03:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:25.082 10:03:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:25.082 10:03:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:25.082 10:03:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:18:25.082 10:03:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.082 10:03:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:25.082 10:03:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.082 10:03:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:25.082 "name": "Existed_Raid", 00:18:25.082 "uuid": "0c73c384-06f6-4150-bf50-4b62f3dd7471", 00:18:25.082 "strip_size_kb": 0, 00:18:25.082 "state": "configuring", 00:18:25.082 "raid_level": "raid1", 00:18:25.082 "superblock": true, 00:18:25.082 "num_base_bdevs": 2, 00:18:25.082 "num_base_bdevs_discovered": 0, 00:18:25.082 "num_base_bdevs_operational": 2, 00:18:25.082 "base_bdevs_list": [ 00:18:25.082 { 00:18:25.082 "name": "BaseBdev1", 00:18:25.082 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:25.082 "is_configured": false, 00:18:25.082 "data_offset": 0, 00:18:25.082 "data_size": 0 00:18:25.082 }, 00:18:25.082 { 00:18:25.082 "name": "BaseBdev2", 00:18:25.082 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:25.082 "is_configured": false, 00:18:25.082 "data_offset": 0, 00:18:25.082 "data_size": 0 00:18:25.082 } 00:18:25.082 ] 00:18:25.082 }' 00:18:25.082 10:03:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:25.082 10:03:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:25.652 10:03:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:25.652 10:03:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.652 10:03:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:25.652 
[2024-10-21 10:03:01.955005] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:25.652 [2024-10-21 10:03:01.955072] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000005b80 name Existed_Raid, state configuring 00:18:25.652 10:03:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.652 10:03:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:25.652 10:03:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.652 10:03:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:25.652 [2024-10-21 10:03:01.967005] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:25.652 [2024-10-21 10:03:01.967066] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:25.652 [2024-10-21 10:03:01.967077] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:25.652 [2024-10-21 10:03:01.967093] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:25.652 10:03:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.652 10:03:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:18:25.652 10:03:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.653 10:03:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:25.653 [2024-10-21 10:03:02.035390] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:25.653 
BaseBdev1 00:18:25.653 10:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.653 10:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:25.653 10:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:18:25.653 10:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:25.653 10:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local i 00:18:25.653 10:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:25.653 10:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:25.653 10:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:18:25.653 10:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.653 10:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:25.653 10:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.653 10:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:25.653 10:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.653 10:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:25.653 [ 00:18:25.653 { 00:18:25.653 "name": "BaseBdev1", 00:18:25.653 "aliases": [ 00:18:25.653 "ea43e255-01e1-445f-8b1c-48a869ff596a" 00:18:25.653 ], 00:18:25.653 "product_name": "Malloc disk", 
00:18:25.653 "block_size": 4096, 00:18:25.653 "num_blocks": 8192, 00:18:25.653 "uuid": "ea43e255-01e1-445f-8b1c-48a869ff596a", 00:18:25.653 "md_size": 32, 00:18:25.653 "md_interleave": false, 00:18:25.653 "dif_type": 0, 00:18:25.653 "assigned_rate_limits": { 00:18:25.653 "rw_ios_per_sec": 0, 00:18:25.653 "rw_mbytes_per_sec": 0, 00:18:25.653 "r_mbytes_per_sec": 0, 00:18:25.653 "w_mbytes_per_sec": 0 00:18:25.653 }, 00:18:25.653 "claimed": true, 00:18:25.653 "claim_type": "exclusive_write", 00:18:25.653 "zoned": false, 00:18:25.653 "supported_io_types": { 00:18:25.653 "read": true, 00:18:25.653 "write": true, 00:18:25.653 "unmap": true, 00:18:25.653 "flush": true, 00:18:25.653 "reset": true, 00:18:25.653 "nvme_admin": false, 00:18:25.653 "nvme_io": false, 00:18:25.653 "nvme_io_md": false, 00:18:25.653 "write_zeroes": true, 00:18:25.653 "zcopy": true, 00:18:25.653 "get_zone_info": false, 00:18:25.653 "zone_management": false, 00:18:25.653 "zone_append": false, 00:18:25.653 "compare": false, 00:18:25.653 "compare_and_write": false, 00:18:25.653 "abort": true, 00:18:25.653 "seek_hole": false, 00:18:25.653 "seek_data": false, 00:18:25.653 "copy": true, 00:18:25.653 "nvme_iov_md": false 00:18:25.653 }, 00:18:25.653 "memory_domains": [ 00:18:25.653 { 00:18:25.653 "dma_device_id": "system", 00:18:25.653 "dma_device_type": 1 00:18:25.653 }, 00:18:25.653 { 00:18:25.653 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:25.653 "dma_device_type": 2 00:18:25.653 } 00:18:25.653 ], 00:18:25.653 "driver_specific": {} 00:18:25.653 } 00:18:25.653 ] 00:18:25.653 10:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.653 10:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@907 -- # return 0 00:18:25.653 10:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:25.653 10:03:02 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:25.653 10:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:25.653 10:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:25.653 10:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:25.653 10:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:25.653 10:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:25.653 10:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:25.653 10:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:25.653 10:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:25.653 10:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:25.653 10:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:25.653 10:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.653 10:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:25.653 10:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.653 10:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:25.653 "name": "Existed_Raid", 00:18:25.653 "uuid": "4ab1a5fe-dd77-43e2-a4ad-6fbfa240516c", 
00:18:25.653 "strip_size_kb": 0, 00:18:25.653 "state": "configuring", 00:18:25.653 "raid_level": "raid1", 00:18:25.653 "superblock": true, 00:18:25.653 "num_base_bdevs": 2, 00:18:25.653 "num_base_bdevs_discovered": 1, 00:18:25.653 "num_base_bdevs_operational": 2, 00:18:25.653 "base_bdevs_list": [ 00:18:25.653 { 00:18:25.653 "name": "BaseBdev1", 00:18:25.653 "uuid": "ea43e255-01e1-445f-8b1c-48a869ff596a", 00:18:25.653 "is_configured": true, 00:18:25.653 "data_offset": 256, 00:18:25.653 "data_size": 7936 00:18:25.653 }, 00:18:25.653 { 00:18:25.653 "name": "BaseBdev2", 00:18:25.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:25.653 "is_configured": false, 00:18:25.653 "data_offset": 0, 00:18:25.653 "data_size": 0 00:18:25.653 } 00:18:25.653 ] 00:18:25.653 }' 00:18:25.653 10:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:25.653 10:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:26.223 10:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:26.223 10:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.223 10:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:26.223 [2024-10-21 10:03:02.514843] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:26.223 [2024-10-21 10:03:02.514918] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000005f00 name Existed_Raid, state configuring 00:18:26.223 10:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.223 10:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:26.223 10:03:02 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.223 10:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:26.223 [2024-10-21 10:03:02.526836] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:26.223 [2024-10-21 10:03:02.529281] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:26.223 [2024-10-21 10:03:02.529335] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:26.223 10:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.223 10:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:26.223 10:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:26.223 10:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:26.223 10:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:26.223 10:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:26.223 10:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:26.223 10:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:26.223 10:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:26.223 10:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:26.223 10:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:26.223 10:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:26.223 10:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:26.223 10:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:26.223 10:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:26.223 10:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.223 10:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:26.223 10:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.223 10:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:26.223 "name": "Existed_Raid", 00:18:26.223 "uuid": "8bcb3625-292b-4cac-9024-9259007a7d25", 00:18:26.223 "strip_size_kb": 0, 00:18:26.223 "state": "configuring", 00:18:26.223 "raid_level": "raid1", 00:18:26.223 "superblock": true, 00:18:26.223 "num_base_bdevs": 2, 00:18:26.223 "num_base_bdevs_discovered": 1, 00:18:26.223 "num_base_bdevs_operational": 2, 00:18:26.223 "base_bdevs_list": [ 00:18:26.223 { 00:18:26.223 "name": "BaseBdev1", 00:18:26.223 "uuid": "ea43e255-01e1-445f-8b1c-48a869ff596a", 00:18:26.223 "is_configured": true, 00:18:26.223 "data_offset": 256, 00:18:26.223 "data_size": 7936 00:18:26.223 }, 00:18:26.223 { 00:18:26.223 "name": "BaseBdev2", 00:18:26.223 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:26.223 "is_configured": false, 00:18:26.223 "data_offset": 0, 00:18:26.223 "data_size": 0 00:18:26.223 } 00:18:26.223 ] 00:18:26.223 }' 00:18:26.223 10:03:02 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:26.223 10:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:26.483 10:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:18:26.483 10:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.483 10:03:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:26.483 [2024-10-21 10:03:03.039258] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:26.483 [2024-10-21 10:03:03.039584] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:18:26.483 [2024-10-21 10:03:03.039603] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:26.483 [2024-10-21 10:03:03.039713] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:18:26.483 [2024-10-21 10:03:03.039887] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:18:26.483 [2024-10-21 10:03:03.039907] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006280 00:18:26.483 [2024-10-21 10:03:03.040036] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:26.483 BaseBdev2 00:18:26.483 10:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.483 10:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:26.483 10:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:18:26.483 10:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:26.483 10:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local i 00:18:26.483 10:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:26.483 10:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:26.483 10:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:18:26.483 10:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.483 10:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:26.483 10:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.483 10:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:26.483 10:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.483 10:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:26.483 [ 00:18:26.483 { 00:18:26.483 "name": "BaseBdev2", 00:18:26.483 "aliases": [ 00:18:26.483 "3fe90046-d791-4c96-9ce7-29ed9a26b1cd" 00:18:26.483 ], 00:18:26.483 "product_name": "Malloc disk", 00:18:26.483 "block_size": 4096, 00:18:26.483 "num_blocks": 8192, 00:18:26.483 "uuid": "3fe90046-d791-4c96-9ce7-29ed9a26b1cd", 00:18:26.483 "md_size": 32, 00:18:26.483 "md_interleave": false, 00:18:26.483 "dif_type": 0, 00:18:26.483 "assigned_rate_limits": { 00:18:26.483 "rw_ios_per_sec": 0, 00:18:26.483 "rw_mbytes_per_sec": 0, 00:18:26.483 "r_mbytes_per_sec": 0, 00:18:26.483 "w_mbytes_per_sec": 0 00:18:26.483 }, 00:18:26.483 "claimed": true, 00:18:26.483 "claim_type": 
"exclusive_write", 00:18:26.483 "zoned": false, 00:18:26.483 "supported_io_types": { 00:18:26.483 "read": true, 00:18:26.483 "write": true, 00:18:26.483 "unmap": true, 00:18:26.483 "flush": true, 00:18:26.483 "reset": true, 00:18:26.483 "nvme_admin": false, 00:18:26.483 "nvme_io": false, 00:18:26.483 "nvme_io_md": false, 00:18:26.483 "write_zeroes": true, 00:18:26.483 "zcopy": true, 00:18:26.483 "get_zone_info": false, 00:18:26.483 "zone_management": false, 00:18:26.483 "zone_append": false, 00:18:26.483 "compare": false, 00:18:26.483 "compare_and_write": false, 00:18:26.483 "abort": true, 00:18:26.483 "seek_hole": false, 00:18:26.483 "seek_data": false, 00:18:26.483 "copy": true, 00:18:26.483 "nvme_iov_md": false 00:18:26.483 }, 00:18:26.483 "memory_domains": [ 00:18:26.483 { 00:18:26.483 "dma_device_id": "system", 00:18:26.483 "dma_device_type": 1 00:18:26.483 }, 00:18:26.483 { 00:18:26.483 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:26.483 "dma_device_type": 2 00:18:26.483 } 00:18:26.483 ], 00:18:26.483 "driver_specific": {} 00:18:26.483 } 00:18:26.483 ] 00:18:26.483 10:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.483 10:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@907 -- # return 0 00:18:26.483 10:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:26.483 10:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:26.484 10:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:18:26.484 10:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:26.484 10:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:26.484 
10:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:26.484 10:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:26.484 10:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:26.484 10:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:26.484 10:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:26.484 10:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:26.484 10:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:26.744 10:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:26.744 10:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:26.744 10:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.744 10:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:26.744 10:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.744 10:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:26.744 "name": "Existed_Raid", 00:18:26.744 "uuid": "8bcb3625-292b-4cac-9024-9259007a7d25", 00:18:26.744 "strip_size_kb": 0, 00:18:26.744 "state": "online", 00:18:26.744 "raid_level": "raid1", 00:18:26.744 "superblock": true, 00:18:26.744 "num_base_bdevs": 2, 00:18:26.744 "num_base_bdevs_discovered": 2, 00:18:26.744 "num_base_bdevs_operational": 2, 00:18:26.744 
"base_bdevs_list": [ 00:18:26.744 { 00:18:26.744 "name": "BaseBdev1", 00:18:26.744 "uuid": "ea43e255-01e1-445f-8b1c-48a869ff596a", 00:18:26.744 "is_configured": true, 00:18:26.744 "data_offset": 256, 00:18:26.744 "data_size": 7936 00:18:26.744 }, 00:18:26.744 { 00:18:26.744 "name": "BaseBdev2", 00:18:26.744 "uuid": "3fe90046-d791-4c96-9ce7-29ed9a26b1cd", 00:18:26.744 "is_configured": true, 00:18:26.744 "data_offset": 256, 00:18:26.744 "data_size": 7936 00:18:26.744 } 00:18:26.744 ] 00:18:26.744 }' 00:18:26.744 10:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:26.744 10:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:27.004 10:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:27.004 10:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:27.004 10:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:27.004 10:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:27.004 10:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:18:27.004 10:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:27.004 10:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:27.004 10:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:27.004 10:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.004 10:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # 
set +x 00:18:27.004 [2024-10-21 10:03:03.511430] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:27.004 10:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.005 10:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:27.005 "name": "Existed_Raid", 00:18:27.005 "aliases": [ 00:18:27.005 "8bcb3625-292b-4cac-9024-9259007a7d25" 00:18:27.005 ], 00:18:27.005 "product_name": "Raid Volume", 00:18:27.005 "block_size": 4096, 00:18:27.005 "num_blocks": 7936, 00:18:27.005 "uuid": "8bcb3625-292b-4cac-9024-9259007a7d25", 00:18:27.005 "md_size": 32, 00:18:27.005 "md_interleave": false, 00:18:27.005 "dif_type": 0, 00:18:27.005 "assigned_rate_limits": { 00:18:27.005 "rw_ios_per_sec": 0, 00:18:27.005 "rw_mbytes_per_sec": 0, 00:18:27.005 "r_mbytes_per_sec": 0, 00:18:27.005 "w_mbytes_per_sec": 0 00:18:27.005 }, 00:18:27.005 "claimed": false, 00:18:27.005 "zoned": false, 00:18:27.005 "supported_io_types": { 00:18:27.005 "read": true, 00:18:27.005 "write": true, 00:18:27.005 "unmap": false, 00:18:27.005 "flush": false, 00:18:27.005 "reset": true, 00:18:27.005 "nvme_admin": false, 00:18:27.005 "nvme_io": false, 00:18:27.005 "nvme_io_md": false, 00:18:27.005 "write_zeroes": true, 00:18:27.005 "zcopy": false, 00:18:27.005 "get_zone_info": false, 00:18:27.005 "zone_management": false, 00:18:27.005 "zone_append": false, 00:18:27.005 "compare": false, 00:18:27.005 "compare_and_write": false, 00:18:27.005 "abort": false, 00:18:27.005 "seek_hole": false, 00:18:27.005 "seek_data": false, 00:18:27.005 "copy": false, 00:18:27.005 "nvme_iov_md": false 00:18:27.005 }, 00:18:27.005 "memory_domains": [ 00:18:27.005 { 00:18:27.005 "dma_device_id": "system", 00:18:27.005 "dma_device_type": 1 00:18:27.005 }, 00:18:27.005 { 00:18:27.005 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:27.005 "dma_device_type": 2 00:18:27.005 }, 00:18:27.005 { 
00:18:27.005 "dma_device_id": "system", 00:18:27.005 "dma_device_type": 1 00:18:27.005 }, 00:18:27.005 { 00:18:27.005 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:27.005 "dma_device_type": 2 00:18:27.005 } 00:18:27.005 ], 00:18:27.005 "driver_specific": { 00:18:27.005 "raid": { 00:18:27.005 "uuid": "8bcb3625-292b-4cac-9024-9259007a7d25", 00:18:27.005 "strip_size_kb": 0, 00:18:27.005 "state": "online", 00:18:27.005 "raid_level": "raid1", 00:18:27.005 "superblock": true, 00:18:27.005 "num_base_bdevs": 2, 00:18:27.005 "num_base_bdevs_discovered": 2, 00:18:27.005 "num_base_bdevs_operational": 2, 00:18:27.005 "base_bdevs_list": [ 00:18:27.005 { 00:18:27.005 "name": "BaseBdev1", 00:18:27.005 "uuid": "ea43e255-01e1-445f-8b1c-48a869ff596a", 00:18:27.005 "is_configured": true, 00:18:27.005 "data_offset": 256, 00:18:27.005 "data_size": 7936 00:18:27.005 }, 00:18:27.005 { 00:18:27.005 "name": "BaseBdev2", 00:18:27.005 "uuid": "3fe90046-d791-4c96-9ce7-29ed9a26b1cd", 00:18:27.005 "is_configured": true, 00:18:27.005 "data_offset": 256, 00:18:27.005 "data_size": 7936 00:18:27.005 } 00:18:27.005 ] 00:18:27.005 } 00:18:27.005 } 00:18:27.005 }' 00:18:27.005 10:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:27.265 10:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:27.265 BaseBdev2' 00:18:27.265 10:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:27.265 10:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:18:27.265 10:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:27.265 10:03:03 bdev_raid.raid_state_function_test_sb_md_separate 
-- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:27.265 10:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.265 10:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:27.265 10:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:27.265 10:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.265 10:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:27.265 10:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:27.265 10:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:27.265 10:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:27.265 10:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:27.265 10:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.265 10:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:27.265 10:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.265 10:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:27.265 10:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ 
\0 ]] 00:18:27.265 10:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:27.265 10:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.265 10:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:27.265 [2024-10-21 10:03:03.762536] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:27.526 10:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.526 10:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:27.526 10:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:18:27.526 10:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:27.526 10:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:18:27.526 10:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:27.526 10:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:18:27.526 10:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:27.526 10:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:27.526 10:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:27.526 10:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:27.526 10:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:18:27.526 10:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:27.526 10:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:27.526 10:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:27.526 10:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:27.526 10:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.526 10:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:27.526 10:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.526 10:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:27.526 10:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.526 10:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:27.526 "name": "Existed_Raid", 00:18:27.526 "uuid": "8bcb3625-292b-4cac-9024-9259007a7d25", 00:18:27.526 "strip_size_kb": 0, 00:18:27.526 "state": "online", 00:18:27.526 "raid_level": "raid1", 00:18:27.526 "superblock": true, 00:18:27.526 "num_base_bdevs": 2, 00:18:27.526 "num_base_bdevs_discovered": 1, 00:18:27.526 "num_base_bdevs_operational": 1, 00:18:27.526 "base_bdevs_list": [ 00:18:27.526 { 00:18:27.526 "name": null, 00:18:27.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.526 "is_configured": false, 00:18:27.526 "data_offset": 0, 00:18:27.526 "data_size": 7936 00:18:27.526 }, 00:18:27.526 { 00:18:27.526 "name": "BaseBdev2", 00:18:27.526 "uuid": 
"3fe90046-d791-4c96-9ce7-29ed9a26b1cd", 00:18:27.526 "is_configured": true, 00:18:27.526 "data_offset": 256, 00:18:27.526 "data_size": 7936 00:18:27.526 } 00:18:27.526 ] 00:18:27.526 }' 00:18:27.526 10:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:27.526 10:03:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:27.787 10:03:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:27.787 10:03:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:27.787 10:03:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.787 10:03:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.787 10:03:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:27.787 10:03:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:27.787 10:03:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.047 10:03:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:28.047 10:03:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:28.047 10:03:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:28.047 10:03:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.047 10:03:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:28.047 [2024-10-21 10:03:04.391666] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:28.047 [2024-10-21 10:03:04.391825] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:28.047 [2024-10-21 10:03:04.531275] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:28.047 [2024-10-21 10:03:04.531458] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:28.047 [2024-10-21 10:03:04.531517] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state offline 00:18:28.047 10:03:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.047 10:03:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:28.047 10:03:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:28.047 10:03:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.047 10:03:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:28.047 10:03:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.047 10:03:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:28.047 10:03:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.047 10:03:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:28.047 10:03:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:28.047 10:03:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:18:28.047 10:03:04 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 86892 00:18:28.047 10:03:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@950 -- # '[' -z 86892 ']' 00:18:28.047 10:03:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # kill -0 86892 00:18:28.047 10:03:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@955 -- # uname 00:18:28.047 10:03:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:28.047 10:03:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86892 00:18:28.047 10:03:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:28.047 10:03:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:28.047 10:03:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86892' 00:18:28.047 killing process with pid 86892 00:18:28.047 10:03:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@969 -- # kill 86892 00:18:28.047 [2024-10-21 10:03:04.630977] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:28.047 10:03:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@974 -- # wait 86892 00:18:28.307 [2024-10-21 10:03:04.653258] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:29.688 10:03:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:18:29.688 00:18:29.688 real 0m5.608s 00:18:29.688 user 0m7.729s 00:18:29.688 sys 0m1.015s 00:18:29.688 10:03:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:29.688 
************************************ 00:18:29.688 END TEST raid_state_function_test_sb_md_separate 00:18:29.688 ************************************ 00:18:29.688 10:03:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:29.688 10:03:06 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:18:29.688 10:03:06 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:18:29.688 10:03:06 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:29.688 10:03:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:29.688 ************************************ 00:18:29.688 START TEST raid_superblock_test_md_separate 00:18:29.688 ************************************ 00:18:29.688 10:03:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:18:29.688 10:03:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:18:29.688 10:03:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:18:29.688 10:03:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:18:29.688 10:03:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:18:29.688 10:03:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:18:29.688 10:03:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:18:29.688 10:03:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:18:29.688 10:03:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:18:29.689 10:03:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 
00:18:29.689 10:03:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:18:29.689 10:03:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:18:29.689 10:03:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:18:29.689 10:03:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:18:29.689 10:03:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:18:29.689 10:03:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:18:29.689 10:03:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=87140 00:18:29.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:29.689 10:03:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 87140 00:18:29.689 10:03:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@831 -- # '[' -z 87140 ']' 00:18:29.689 10:03:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:29.689 10:03:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:29.689 10:03:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:29.689 10:03:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:29.689 10:03:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:29.689 10:03:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:18:29.947 [2024-10-21 10:03:06.312485] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:18:29.947 [2024-10-21 10:03:06.312631] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87140 ] 00:18:29.947 [2024-10-21 10:03:06.475059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:30.207 [2024-10-21 10:03:06.643858] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:30.467 [2024-10-21 10:03:06.945258] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:30.467 [2024-10-21 10:03:06.945333] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:30.725 10:03:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:30.725 10:03:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # return 0 00:18:30.725 10:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:18:30.725 10:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:30.725 10:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:18:30.725 10:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:18:30.725 10:03:07 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:30.725 10:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:30.725 10:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:30.725 10:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:30.725 10:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:18:30.725 10:03:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.725 10:03:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:30.725 malloc1 00:18:30.725 10:03:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.725 10:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:30.725 10:03:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.725 10:03:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:30.725 [2024-10-21 10:03:07.242707] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:30.725 [2024-10-21 10:03:07.242788] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:30.725 [2024-10-21 10:03:07.242813] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:18:30.725 [2024-10-21 10:03:07.242825] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:30.725 [2024-10-21 10:03:07.245396] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:30.725 [2024-10-21 10:03:07.245441] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:30.725 pt1 00:18:30.725 10:03:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.725 10:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:30.725 10:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:30.725 10:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:18:30.725 10:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:18:30.725 10:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:30.725 10:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:30.725 10:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:30.726 10:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:30.726 10:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:18:30.726 10:03:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.726 10:03:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:30.726 malloc2 00:18:30.726 10:03:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.726 10:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:18:30.726 10:03:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.726 10:03:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:30.726 [2024-10-21 10:03:07.317237] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:30.726 [2024-10-21 10:03:07.317302] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:30.726 [2024-10-21 10:03:07.317343] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:18:30.726 [2024-10-21 10:03:07.317354] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:30.726 [2024-10-21 10:03:07.319949] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:30.726 [2024-10-21 10:03:07.319990] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:30.985 pt2 00:18:30.985 10:03:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.985 10:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:30.985 10:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:30.985 10:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:18:30.985 10:03:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.985 10:03:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:30.985 [2024-10-21 10:03:07.329315] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:30.985 [2024-10-21 10:03:07.331872] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:30.985 [2024-10-21 10:03:07.332091] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000005b80 00:18:30.985 [2024-10-21 10:03:07.332107] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:30.985 [2024-10-21 10:03:07.332213] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:18:30.985 [2024-10-21 10:03:07.332380] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000005b80 00:18:30.985 [2024-10-21 10:03:07.332394] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000005b80 00:18:30.985 [2024-10-21 10:03:07.332532] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:30.985 10:03:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.985 10:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:30.985 10:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:30.985 10:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:30.985 10:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:30.985 10:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:30.985 10:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:30.985 10:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:30.985 10:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:30.985 10:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:30.985 10:03:07 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:30.985 10:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.985 10:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:30.985 10:03:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.985 10:03:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:30.985 10:03:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.985 10:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:30.985 "name": "raid_bdev1", 00:18:30.985 "uuid": "e50eadab-f1ce-425e-9c6b-f8aef3b6b7ee", 00:18:30.985 "strip_size_kb": 0, 00:18:30.985 "state": "online", 00:18:30.985 "raid_level": "raid1", 00:18:30.985 "superblock": true, 00:18:30.985 "num_base_bdevs": 2, 00:18:30.985 "num_base_bdevs_discovered": 2, 00:18:30.985 "num_base_bdevs_operational": 2, 00:18:30.985 "base_bdevs_list": [ 00:18:30.985 { 00:18:30.985 "name": "pt1", 00:18:30.985 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:30.985 "is_configured": true, 00:18:30.985 "data_offset": 256, 00:18:30.985 "data_size": 7936 00:18:30.985 }, 00:18:30.985 { 00:18:30.985 "name": "pt2", 00:18:30.985 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:30.985 "is_configured": true, 00:18:30.985 "data_offset": 256, 00:18:30.985 "data_size": 7936 00:18:30.985 } 00:18:30.985 ] 00:18:30.985 }' 00:18:30.985 10:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:30.985 10:03:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:31.246 10:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # 
verify_raid_bdev_properties raid_bdev1 00:18:31.246 10:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:31.246 10:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:31.246 10:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:31.246 10:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:18:31.246 10:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:31.246 10:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:31.246 10:03:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.246 10:03:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:31.246 10:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:31.246 [2024-10-21 10:03:07.828850] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:31.506 10:03:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.506 10:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:31.506 "name": "raid_bdev1", 00:18:31.506 "aliases": [ 00:18:31.506 "e50eadab-f1ce-425e-9c6b-f8aef3b6b7ee" 00:18:31.506 ], 00:18:31.506 "product_name": "Raid Volume", 00:18:31.506 "block_size": 4096, 00:18:31.506 "num_blocks": 7936, 00:18:31.506 "uuid": "e50eadab-f1ce-425e-9c6b-f8aef3b6b7ee", 00:18:31.506 "md_size": 32, 00:18:31.506 "md_interleave": false, 00:18:31.506 "dif_type": 0, 00:18:31.506 "assigned_rate_limits": { 00:18:31.506 "rw_ios_per_sec": 0, 00:18:31.506 "rw_mbytes_per_sec": 0, 00:18:31.506 "r_mbytes_per_sec": 0, 00:18:31.506 
"w_mbytes_per_sec": 0 00:18:31.506 }, 00:18:31.506 "claimed": false, 00:18:31.506 "zoned": false, 00:18:31.506 "supported_io_types": { 00:18:31.506 "read": true, 00:18:31.506 "write": true, 00:18:31.506 "unmap": false, 00:18:31.506 "flush": false, 00:18:31.506 "reset": true, 00:18:31.506 "nvme_admin": false, 00:18:31.506 "nvme_io": false, 00:18:31.506 "nvme_io_md": false, 00:18:31.506 "write_zeroes": true, 00:18:31.506 "zcopy": false, 00:18:31.506 "get_zone_info": false, 00:18:31.506 "zone_management": false, 00:18:31.506 "zone_append": false, 00:18:31.506 "compare": false, 00:18:31.506 "compare_and_write": false, 00:18:31.506 "abort": false, 00:18:31.507 "seek_hole": false, 00:18:31.507 "seek_data": false, 00:18:31.507 "copy": false, 00:18:31.507 "nvme_iov_md": false 00:18:31.507 }, 00:18:31.507 "memory_domains": [ 00:18:31.507 { 00:18:31.507 "dma_device_id": "system", 00:18:31.507 "dma_device_type": 1 00:18:31.507 }, 00:18:31.507 { 00:18:31.507 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:31.507 "dma_device_type": 2 00:18:31.507 }, 00:18:31.507 { 00:18:31.507 "dma_device_id": "system", 00:18:31.507 "dma_device_type": 1 00:18:31.507 }, 00:18:31.507 { 00:18:31.507 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:31.507 "dma_device_type": 2 00:18:31.507 } 00:18:31.507 ], 00:18:31.507 "driver_specific": { 00:18:31.507 "raid": { 00:18:31.507 "uuid": "e50eadab-f1ce-425e-9c6b-f8aef3b6b7ee", 00:18:31.507 "strip_size_kb": 0, 00:18:31.507 "state": "online", 00:18:31.507 "raid_level": "raid1", 00:18:31.507 "superblock": true, 00:18:31.507 "num_base_bdevs": 2, 00:18:31.507 "num_base_bdevs_discovered": 2, 00:18:31.507 "num_base_bdevs_operational": 2, 00:18:31.507 "base_bdevs_list": [ 00:18:31.507 { 00:18:31.507 "name": "pt1", 00:18:31.507 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:31.507 "is_configured": true, 00:18:31.507 "data_offset": 256, 00:18:31.507 "data_size": 7936 00:18:31.507 }, 00:18:31.507 { 00:18:31.507 "name": "pt2", 00:18:31.507 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:18:31.507 "is_configured": true, 00:18:31.507 "data_offset": 256, 00:18:31.507 "data_size": 7936 00:18:31.507 } 00:18:31.507 ] 00:18:31.507 } 00:18:31.507 } 00:18:31.507 }' 00:18:31.507 10:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:31.507 10:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:31.507 pt2' 00:18:31.507 10:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:31.507 10:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:18:31.507 10:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:31.507 10:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:31.507 10:03:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.507 10:03:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:31.507 10:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:31.507 10:03:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.507 10:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:31.507 10:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:31.507 10:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 
00:18:31.507 10:03:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:31.507 10:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:31.507 10:03:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.507 10:03:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:31.507 10:03:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.507 10:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:31.507 10:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:31.507 10:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:31.507 10:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:18:31.507 10:03:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.507 10:03:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:31.507 [2024-10-21 10:03:08.036362] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:31.507 10:03:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.507 10:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=e50eadab-f1ce-425e-9c6b-f8aef3b6b7ee 00:18:31.507 10:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z e50eadab-f1ce-425e-9c6b-f8aef3b6b7ee ']' 00:18:31.507 10:03:08 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:31.507 10:03:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.507 10:03:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:31.507 [2024-10-21 10:03:08.079964] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:31.507 [2024-10-21 10:03:08.080044] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:31.507 [2024-10-21 10:03:08.080191] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:31.507 [2024-10-21 10:03:08.080304] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:31.507 [2024-10-21 10:03:08.080372] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000005b80 name raid_bdev1, state offline 00:18:31.507 10:03:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.507 10:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.507 10:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:18:31.507 10:03:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.507 10:03:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:31.768 10:03:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.768 10:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:18:31.768 10:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:18:31.768 10:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 
00:18:31.768 10:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:18:31.768 10:03:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.768 10:03:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:31.768 10:03:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.768 10:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:31.768 10:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:18:31.768 10:03:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.768 10:03:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:31.768 10:03:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.768 10:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:18:31.768 10:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:31.768 10:03:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.768 10:03:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:31.768 10:03:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.768 10:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:18:31.768 10:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:31.768 10:03:08 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:18:31.768 10:03:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:31.768 10:03:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:31.768 10:03:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:31.768 10:03:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:31.768 10:03:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:31.768 10:03:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:31.768 10:03:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.768 10:03:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:31.768 [2024-10-21 10:03:08.207814] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:31.768 [2024-10-21 10:03:08.210365] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:31.768 [2024-10-21 10:03:08.210476] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:31.768 [2024-10-21 10:03:08.210537] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:31.768 [2024-10-21 10:03:08.210554] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:31.768 [2024-10-21 10:03:08.210581] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000005f00 name raid_bdev1, state configuring 00:18:31.768 request: 00:18:31.768 { 00:18:31.768 "name": "raid_bdev1", 00:18:31.768 "raid_level": "raid1", 00:18:31.768 "base_bdevs": [ 00:18:31.768 "malloc1", 00:18:31.768 "malloc2" 00:18:31.768 ], 00:18:31.768 "superblock": false, 00:18:31.768 "method": "bdev_raid_create", 00:18:31.768 "req_id": 1 00:18:31.768 } 00:18:31.768 Got JSON-RPC error response 00:18:31.768 response: 00:18:31.768 { 00:18:31.768 "code": -17, 00:18:31.768 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:31.768 } 00:18:31.768 10:03:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:31.768 10:03:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # es=1 00:18:31.768 10:03:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:31.768 10:03:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:31.768 10:03:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:31.768 10:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.768 10:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:18:31.768 10:03:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.768 10:03:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:31.768 10:03:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.768 10:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:18:31.768 10:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:18:31.768 10:03:08 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:31.768 10:03:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.768 10:03:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:31.768 [2024-10-21 10:03:08.267679] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:31.768 [2024-10-21 10:03:08.267793] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:31.768 [2024-10-21 10:03:08.267833] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:18:31.768 [2024-10-21 10:03:08.267870] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:31.768 [2024-10-21 10:03:08.270458] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:31.768 [2024-10-21 10:03:08.270543] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:31.768 [2024-10-21 10:03:08.270639] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:31.769 [2024-10-21 10:03:08.270730] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:31.769 pt1 00:18:31.769 10:03:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.769 10:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:18:31.769 10:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:31.769 10:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:31.769 10:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:18:31.769 10:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:31.769 10:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:31.769 10:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:31.769 10:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:31.769 10:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:31.769 10:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:31.769 10:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.769 10:03:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.769 10:03:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:31.769 10:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:31.769 10:03:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.769 10:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:31.769 "name": "raid_bdev1", 00:18:31.769 "uuid": "e50eadab-f1ce-425e-9c6b-f8aef3b6b7ee", 00:18:31.769 "strip_size_kb": 0, 00:18:31.769 "state": "configuring", 00:18:31.769 "raid_level": "raid1", 00:18:31.769 "superblock": true, 00:18:31.769 "num_base_bdevs": 2, 00:18:31.769 "num_base_bdevs_discovered": 1, 00:18:31.769 "num_base_bdevs_operational": 2, 00:18:31.769 "base_bdevs_list": [ 00:18:31.769 { 00:18:31.769 "name": "pt1", 00:18:31.769 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:31.769 "is_configured": true, 00:18:31.769 
"data_offset": 256, 00:18:31.769 "data_size": 7936 00:18:31.769 }, 00:18:31.769 { 00:18:31.769 "name": null, 00:18:31.769 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:31.769 "is_configured": false, 00:18:31.769 "data_offset": 256, 00:18:31.769 "data_size": 7936 00:18:31.769 } 00:18:31.769 ] 00:18:31.769 }' 00:18:31.769 10:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:31.769 10:03:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:32.340 10:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:18:32.340 10:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:18:32.340 10:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:32.340 10:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:32.340 10:03:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.340 10:03:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:32.340 [2024-10-21 10:03:08.762969] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:32.340 [2024-10-21 10:03:08.763090] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:32.340 [2024-10-21 10:03:08.763114] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:18:32.340 [2024-10-21 10:03:08.763129] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:32.340 [2024-10-21 10:03:08.763462] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:32.340 [2024-10-21 10:03:08.763484] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: pt2 00:18:32.340 [2024-10-21 10:03:08.763549] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:32.340 [2024-10-21 10:03:08.763579] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:32.340 [2024-10-21 10:03:08.763743] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:18:32.340 [2024-10-21 10:03:08.763757] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:32.340 [2024-10-21 10:03:08.763847] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:18:32.340 [2024-10-21 10:03:08.763979] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:18:32.340 [2024-10-21 10:03:08.763997] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:18:32.340 [2024-10-21 10:03:08.764113] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:32.340 pt2 00:18:32.340 10:03:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.340 10:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:32.340 10:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:32.340 10:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:32.340 10:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:32.340 10:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:32.340 10:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:32.340 10:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # 
local strip_size=0 00:18:32.340 10:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:32.341 10:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:32.341 10:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:32.341 10:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:32.341 10:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:32.341 10:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:32.341 10:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:32.341 10:03:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.341 10:03:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:32.341 10:03:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.341 10:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:32.341 "name": "raid_bdev1", 00:18:32.341 "uuid": "e50eadab-f1ce-425e-9c6b-f8aef3b6b7ee", 00:18:32.341 "strip_size_kb": 0, 00:18:32.341 "state": "online", 00:18:32.341 "raid_level": "raid1", 00:18:32.341 "superblock": true, 00:18:32.341 "num_base_bdevs": 2, 00:18:32.341 "num_base_bdevs_discovered": 2, 00:18:32.341 "num_base_bdevs_operational": 2, 00:18:32.341 "base_bdevs_list": [ 00:18:32.341 { 00:18:32.341 "name": "pt1", 00:18:32.341 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:32.341 "is_configured": true, 00:18:32.341 "data_offset": 256, 00:18:32.341 "data_size": 7936 00:18:32.341 }, 00:18:32.341 { 00:18:32.341 "name": "pt2", 00:18:32.341 
"uuid": "00000000-0000-0000-0000-000000000002", 00:18:32.341 "is_configured": true, 00:18:32.341 "data_offset": 256, 00:18:32.341 "data_size": 7936 00:18:32.341 } 00:18:32.341 ] 00:18:32.341 }' 00:18:32.341 10:03:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:32.341 10:03:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:32.912 10:03:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:18:32.912 10:03:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:32.912 10:03:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:32.912 10:03:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:32.912 10:03:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:18:32.912 10:03:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:32.912 10:03:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:32.912 10:03:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:32.912 10:03:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.912 10:03:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:32.912 [2024-10-21 10:03:09.214503] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:32.912 10:03:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.912 10:03:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:32.912 "name": "raid_bdev1", 
00:18:32.912 "aliases": [ 00:18:32.912 "e50eadab-f1ce-425e-9c6b-f8aef3b6b7ee" 00:18:32.912 ], 00:18:32.912 "product_name": "Raid Volume", 00:18:32.912 "block_size": 4096, 00:18:32.912 "num_blocks": 7936, 00:18:32.912 "uuid": "e50eadab-f1ce-425e-9c6b-f8aef3b6b7ee", 00:18:32.912 "md_size": 32, 00:18:32.912 "md_interleave": false, 00:18:32.912 "dif_type": 0, 00:18:32.912 "assigned_rate_limits": { 00:18:32.912 "rw_ios_per_sec": 0, 00:18:32.912 "rw_mbytes_per_sec": 0, 00:18:32.912 "r_mbytes_per_sec": 0, 00:18:32.912 "w_mbytes_per_sec": 0 00:18:32.912 }, 00:18:32.912 "claimed": false, 00:18:32.912 "zoned": false, 00:18:32.912 "supported_io_types": { 00:18:32.912 "read": true, 00:18:32.912 "write": true, 00:18:32.912 "unmap": false, 00:18:32.912 "flush": false, 00:18:32.912 "reset": true, 00:18:32.912 "nvme_admin": false, 00:18:32.912 "nvme_io": false, 00:18:32.912 "nvme_io_md": false, 00:18:32.912 "write_zeroes": true, 00:18:32.912 "zcopy": false, 00:18:32.912 "get_zone_info": false, 00:18:32.912 "zone_management": false, 00:18:32.912 "zone_append": false, 00:18:32.912 "compare": false, 00:18:32.912 "compare_and_write": false, 00:18:32.912 "abort": false, 00:18:32.912 "seek_hole": false, 00:18:32.912 "seek_data": false, 00:18:32.912 "copy": false, 00:18:32.912 "nvme_iov_md": false 00:18:32.912 }, 00:18:32.912 "memory_domains": [ 00:18:32.912 { 00:18:32.912 "dma_device_id": "system", 00:18:32.912 "dma_device_type": 1 00:18:32.912 }, 00:18:32.912 { 00:18:32.912 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:32.912 "dma_device_type": 2 00:18:32.912 }, 00:18:32.912 { 00:18:32.912 "dma_device_id": "system", 00:18:32.912 "dma_device_type": 1 00:18:32.912 }, 00:18:32.912 { 00:18:32.913 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:32.913 "dma_device_type": 2 00:18:32.913 } 00:18:32.913 ], 00:18:32.913 "driver_specific": { 00:18:32.913 "raid": { 00:18:32.913 "uuid": "e50eadab-f1ce-425e-9c6b-f8aef3b6b7ee", 00:18:32.913 "strip_size_kb": 0, 00:18:32.913 "state": "online", 
00:18:32.913 "raid_level": "raid1", 00:18:32.913 "superblock": true, 00:18:32.913 "num_base_bdevs": 2, 00:18:32.913 "num_base_bdevs_discovered": 2, 00:18:32.913 "num_base_bdevs_operational": 2, 00:18:32.913 "base_bdevs_list": [ 00:18:32.913 { 00:18:32.913 "name": "pt1", 00:18:32.913 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:32.913 "is_configured": true, 00:18:32.913 "data_offset": 256, 00:18:32.913 "data_size": 7936 00:18:32.913 }, 00:18:32.913 { 00:18:32.913 "name": "pt2", 00:18:32.913 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:32.913 "is_configured": true, 00:18:32.913 "data_offset": 256, 00:18:32.913 "data_size": 7936 00:18:32.913 } 00:18:32.913 ] 00:18:32.913 } 00:18:32.913 } 00:18:32.913 }' 00:18:32.913 10:03:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:32.913 10:03:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:32.913 pt2' 00:18:32.913 10:03:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:32.913 10:03:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:18:32.913 10:03:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:32.913 10:03:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:32.913 10:03:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.913 10:03:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:32.913 10:03:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:32.913 
10:03:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.913 10:03:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:32.913 10:03:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:32.913 10:03:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:32.913 10:03:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:32.913 10:03:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.913 10:03:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:32.913 10:03:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:32.913 10:03:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.913 10:03:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:32.913 10:03:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:32.913 10:03:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:18:32.913 10:03:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:32.913 10:03:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.913 10:03:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:32.913 [2024-10-21 10:03:09.466090] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:18:32.913 10:03:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.913 10:03:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' e50eadab-f1ce-425e-9c6b-f8aef3b6b7ee '!=' e50eadab-f1ce-425e-9c6b-f8aef3b6b7ee ']' 00:18:32.913 10:03:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:18:32.913 10:03:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:32.913 10:03:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:18:32.913 10:03:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:18:32.913 10:03:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.913 10:03:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:32.913 [2024-10-21 10:03:09.497796] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:32.913 10:03:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.913 10:03:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:32.913 10:03:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:32.913 10:03:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:32.913 10:03:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:32.913 10:03:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:32.913 10:03:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:32.913 
10:03:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:32.913 10:03:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:32.913 10:03:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:32.913 10:03:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:33.174 10:03:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:33.174 10:03:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.174 10:03:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.174 10:03:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:33.174 10:03:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.174 10:03:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:33.174 "name": "raid_bdev1", 00:18:33.174 "uuid": "e50eadab-f1ce-425e-9c6b-f8aef3b6b7ee", 00:18:33.174 "strip_size_kb": 0, 00:18:33.174 "state": "online", 00:18:33.174 "raid_level": "raid1", 00:18:33.174 "superblock": true, 00:18:33.174 "num_base_bdevs": 2, 00:18:33.174 "num_base_bdevs_discovered": 1, 00:18:33.174 "num_base_bdevs_operational": 1, 00:18:33.174 "base_bdevs_list": [ 00:18:33.174 { 00:18:33.174 "name": null, 00:18:33.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:33.174 "is_configured": false, 00:18:33.174 "data_offset": 0, 00:18:33.174 "data_size": 7936 00:18:33.174 }, 00:18:33.174 { 00:18:33.174 "name": "pt2", 00:18:33.174 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:33.174 "is_configured": true, 00:18:33.174 "data_offset": 256, 00:18:33.174 "data_size": 7936 00:18:33.174 } 
00:18:33.174 ] 00:18:33.174 }' 00:18:33.174 10:03:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:33.174 10:03:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:33.433 10:03:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:33.433 10:03:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.433 10:03:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:33.433 [2024-10-21 10:03:09.893127] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:33.433 [2024-10-21 10:03:09.893278] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:33.433 [2024-10-21 10:03:09.893410] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:33.433 [2024-10-21 10:03:09.893493] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:33.433 [2024-10-21 10:03:09.893554] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:18:33.433 10:03:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.433 10:03:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.433 10:03:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.433 10:03:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:33.433 10:03:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:18:33.433 10:03:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.433 10:03:09 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:18:33.433 10:03:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:18:33.433 10:03:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:18:33.433 10:03:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:33.433 10:03:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:18:33.433 10:03:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.433 10:03:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:33.433 10:03:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.433 10:03:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:33.433 10:03:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:33.433 10:03:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:18:33.433 10:03:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:33.433 10:03:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:18:33.433 10:03:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:33.433 10:03:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.433 10:03:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:33.433 [2024-10-21 10:03:09.968918] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:33.433 [2024-10-21 
10:03:09.968993] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:33.433 [2024-10-21 10:03:09.969012] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:33.433 [2024-10-21 10:03:09.969025] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:33.433 [2024-10-21 10:03:09.971462] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:33.433 [2024-10-21 10:03:09.971561] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:33.433 [2024-10-21 10:03:09.971640] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:33.433 [2024-10-21 10:03:09.971702] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:33.433 [2024-10-21 10:03:09.971817] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:18:33.433 [2024-10-21 10:03:09.971830] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:33.433 [2024-10-21 10:03:09.971916] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:18:33.433 [2024-10-21 10:03:09.972040] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:18:33.433 [2024-10-21 10:03:09.972047] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:18:33.433 [2024-10-21 10:03:09.972159] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:33.433 pt2 00:18:33.433 10:03:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.433 10:03:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:33.433 10:03:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:18:33.434 10:03:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:33.434 10:03:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:33.434 10:03:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:33.434 10:03:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:33.434 10:03:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:33.434 10:03:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:33.434 10:03:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:33.434 10:03:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:33.434 10:03:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.434 10:03:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:33.434 10:03:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.434 10:03:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:33.434 10:03:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.434 10:03:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:33.434 "name": "raid_bdev1", 00:18:33.434 "uuid": "e50eadab-f1ce-425e-9c6b-f8aef3b6b7ee", 00:18:33.434 "strip_size_kb": 0, 00:18:33.434 "state": "online", 00:18:33.434 "raid_level": "raid1", 00:18:33.434 "superblock": true, 00:18:33.434 "num_base_bdevs": 2, 00:18:33.434 
"num_base_bdevs_discovered": 1, 00:18:33.434 "num_base_bdevs_operational": 1, 00:18:33.434 "base_bdevs_list": [ 00:18:33.434 { 00:18:33.434 "name": null, 00:18:33.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:33.434 "is_configured": false, 00:18:33.434 "data_offset": 256, 00:18:33.434 "data_size": 7936 00:18:33.434 }, 00:18:33.434 { 00:18:33.434 "name": "pt2", 00:18:33.434 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:33.434 "is_configured": true, 00:18:33.434 "data_offset": 256, 00:18:33.434 "data_size": 7936 00:18:33.434 } 00:18:33.434 ] 00:18:33.434 }' 00:18:33.434 10:03:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:33.434 10:03:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:34.002 10:03:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:34.002 10:03:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.002 10:03:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:34.002 [2024-10-21 10:03:10.396242] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:34.002 [2024-10-21 10:03:10.396394] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:34.002 [2024-10-21 10:03:10.396516] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:34.002 [2024-10-21 10:03:10.396611] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:34.002 [2024-10-21 10:03:10.396698] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:18:34.002 10:03:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.002 10:03:10 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.002 10:03:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:18:34.002 10:03:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.002 10:03:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:34.002 10:03:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.003 10:03:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:18:34.003 10:03:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:18:34.003 10:03:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:18:34.003 10:03:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:34.003 10:03:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.003 10:03:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:34.003 [2024-10-21 10:03:10.460144] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:34.003 [2024-10-21 10:03:10.460299] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:34.003 [2024-10-21 10:03:10.460346] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:18:34.003 [2024-10-21 10:03:10.460386] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:34.003 [2024-10-21 10:03:10.463131] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:34.003 [2024-10-21 10:03:10.463212] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: pt1 00:18:34.003 [2024-10-21 10:03:10.463308] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:34.003 pt1 00:18:34.003 [2024-10-21 10:03:10.463406] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:34.003 [2024-10-21 10:03:10.463582] bdev_raid.c:3679:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:34.003 [2024-10-21 10:03:10.463598] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:34.003 [2024-10-21 10:03:10.463623] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state configuring 00:18:34.003 [2024-10-21 10:03:10.463708] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:34.003 [2024-10-21 10:03:10.463790] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:18:34.003 [2024-10-21 10:03:10.463801] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:34.003 [2024-10-21 10:03:10.463908] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:34.003 [2024-10-21 10:03:10.464038] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:18:34.003 [2024-10-21 10:03:10.464050] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:18:34.003 [2024-10-21 10:03:10.464202] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:34.003 10:03:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.003 10:03:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:18:34.003 10:03:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 
00:18:34.003 10:03:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:34.003 10:03:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:34.003 10:03:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:34.003 10:03:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:34.003 10:03:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:34.003 10:03:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:34.003 10:03:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:34.003 10:03:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:34.003 10:03:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:34.003 10:03:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.003 10:03:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:34.003 10:03:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.003 10:03:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:34.003 10:03:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.003 10:03:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:34.003 "name": "raid_bdev1", 00:18:34.003 "uuid": "e50eadab-f1ce-425e-9c6b-f8aef3b6b7ee", 00:18:34.003 "strip_size_kb": 0, 00:18:34.003 "state": "online", 00:18:34.003 "raid_level": "raid1", 
00:18:34.003 "superblock": true, 00:18:34.003 "num_base_bdevs": 2, 00:18:34.003 "num_base_bdevs_discovered": 1, 00:18:34.003 "num_base_bdevs_operational": 1, 00:18:34.003 "base_bdevs_list": [ 00:18:34.003 { 00:18:34.003 "name": null, 00:18:34.003 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:34.003 "is_configured": false, 00:18:34.003 "data_offset": 256, 00:18:34.003 "data_size": 7936 00:18:34.003 }, 00:18:34.003 { 00:18:34.003 "name": "pt2", 00:18:34.003 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:34.003 "is_configured": true, 00:18:34.003 "data_offset": 256, 00:18:34.003 "data_size": 7936 00:18:34.003 } 00:18:34.003 ] 00:18:34.003 }' 00:18:34.003 10:03:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:34.003 10:03:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:34.574 10:03:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:18:34.574 10:03:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:34.574 10:03:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.574 10:03:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:34.574 10:03:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.574 10:03:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:18:34.574 10:03:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:34.574 10:03:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:18:34.574 10:03:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.574 
10:03:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:34.574 [2024-10-21 10:03:10.923792] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:34.574 10:03:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.574 10:03:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' e50eadab-f1ce-425e-9c6b-f8aef3b6b7ee '!=' e50eadab-f1ce-425e-9c6b-f8aef3b6b7ee ']' 00:18:34.574 10:03:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 87140 00:18:34.574 10:03:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@950 -- # '[' -z 87140 ']' 00:18:34.574 10:03:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # kill -0 87140 00:18:34.574 10:03:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@955 -- # uname 00:18:34.574 10:03:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:34.574 10:03:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87140 00:18:34.574 killing process with pid 87140 00:18:34.574 10:03:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:34.574 10:03:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:34.574 10:03:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87140' 00:18:34.574 10:03:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@969 -- # kill 87140 00:18:34.574 [2024-10-21 10:03:10.995844] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:34.574 10:03:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@974 -- # 
wait 87140 00:18:34.574 [2024-10-21 10:03:10.995978] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:34.574 [2024-10-21 10:03:10.996043] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:34.574 [2024-10-21 10:03:10.996061] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:18:34.846 [2024-10-21 10:03:11.295491] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:36.242 10:03:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:18:36.242 00:18:36.242 real 0m6.569s 00:18:36.242 user 0m9.550s 00:18:36.242 sys 0m1.211s 00:18:36.242 10:03:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:36.242 ************************************ 00:18:36.242 END TEST raid_superblock_test_md_separate 00:18:36.242 ************************************ 00:18:36.242 10:03:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:36.502 10:03:12 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:18:36.502 10:03:12 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:18:36.502 10:03:12 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:18:36.502 10:03:12 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:36.502 10:03:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:36.502 ************************************ 00:18:36.502 START TEST raid_rebuild_test_sb_md_separate 00:18:36.502 ************************************ 00:18:36.502 10:03:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:18:36.502 10:03:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local 
raid_level=raid1 00:18:36.502 10:03:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:18:36.502 10:03:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:36.502 10:03:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:36.502 10:03:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:36.502 10:03:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:36.502 10:03:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:36.502 10:03:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:36.502 10:03:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:36.502 10:03:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:36.502 10:03:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:36.502 10:03:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:36.502 10:03:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:36.502 10:03:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:36.502 10:03:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:36.502 10:03:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:36.502 10:03:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:36.502 10:03:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:36.502 
10:03:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:36.502 10:03:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:36.502 10:03:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:36.502 10:03:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:18:36.502 10:03:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:36.502 10:03:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:36.502 10:03:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=87475 00:18:36.502 10:03:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:36.502 10:03:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 87475 00:18:36.502 10:03:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@831 -- # '[' -z 87475 ']' 00:18:36.502 10:03:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:36.502 10:03:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:36.502 10:03:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:36.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:36.502 10:03:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:36.502 10:03:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:36.502 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:36.502 Zero copy mechanism will not be used. 00:18:36.502 [2024-10-21 10:03:12.958391] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:18:36.502 [2024-10-21 10:03:12.958511] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87475 ] 00:18:36.762 [2024-10-21 10:03:13.121236] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:36.762 [2024-10-21 10:03:13.289918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:37.022 [2024-10-21 10:03:13.598635] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:37.022 [2024-10-21 10:03:13.598695] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:37.282 10:03:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:37.282 10:03:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # return 0 00:18:37.282 10:03:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:37.282 10:03:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:18:37.282 10:03:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.282 10:03:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:37.282 BaseBdev1_malloc 
00:18:37.282 10:03:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.282 10:03:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:37.282 10:03:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.282 10:03:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:37.282 [2024-10-21 10:03:13.852859] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:37.282 [2024-10-21 10:03:13.852949] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:37.282 [2024-10-21 10:03:13.852979] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:18:37.282 [2024-10-21 10:03:13.852994] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:37.282 [2024-10-21 10:03:13.855605] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:37.282 [2024-10-21 10:03:13.855649] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:37.282 BaseBdev1 00:18:37.282 10:03:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.282 10:03:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:37.282 10:03:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:18:37.282 10:03:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.282 10:03:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:37.542 BaseBdev2_malloc 00:18:37.542 10:03:13 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.542 10:03:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:37.542 10:03:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.542 10:03:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:37.542 [2024-10-21 10:03:13.927548] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:37.542 [2024-10-21 10:03:13.927647] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:37.542 [2024-10-21 10:03:13.927677] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:18:37.542 [2024-10-21 10:03:13.927691] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:37.542 [2024-10-21 10:03:13.930188] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:37.542 [2024-10-21 10:03:13.930231] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:37.542 BaseBdev2 00:18:37.542 10:03:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.542 10:03:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:18:37.542 10:03:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.542 10:03:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:37.542 spare_malloc 00:18:37.542 10:03:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.542 10:03:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 
100000 -n 100000 00:18:37.542 10:03:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.542 10:03:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:37.542 spare_delay 00:18:37.542 10:03:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.542 10:03:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:37.542 10:03:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.542 10:03:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:37.542 [2024-10-21 10:03:14.025585] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:37.542 [2024-10-21 10:03:14.025671] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:37.542 [2024-10-21 10:03:14.025695] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:18:37.542 [2024-10-21 10:03:14.025709] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:37.542 [2024-10-21 10:03:14.028291] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:37.542 [2024-10-21 10:03:14.028337] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:37.542 spare 00:18:37.542 10:03:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.542 10:03:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:18:37.542 10:03:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.542 10:03:14 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:18:37.542 [2024-10-21 10:03:14.037635] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:37.542 [2024-10-21 10:03:14.040072] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:37.542 [2024-10-21 10:03:14.040302] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000005b80 00:18:37.542 [2024-10-21 10:03:14.040321] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:37.542 [2024-10-21 10:03:14.040410] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:18:37.542 [2024-10-21 10:03:14.040557] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000005b80 00:18:37.542 [2024-10-21 10:03:14.040589] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000005b80 00:18:37.542 [2024-10-21 10:03:14.040709] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:37.542 10:03:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.542 10:03:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:37.542 10:03:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:37.542 10:03:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:37.542 10:03:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:37.542 10:03:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:37.542 10:03:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:37.542 10:03:14 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:37.542 10:03:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:37.542 10:03:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:37.542 10:03:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:37.542 10:03:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:37.542 10:03:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:37.542 10:03:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.542 10:03:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:37.542 10:03:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.543 10:03:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:37.543 "name": "raid_bdev1", 00:18:37.543 "uuid": "b173df8c-9658-4ab6-aa9d-3ee67d2c5d79", 00:18:37.543 "strip_size_kb": 0, 00:18:37.543 "state": "online", 00:18:37.543 "raid_level": "raid1", 00:18:37.543 "superblock": true, 00:18:37.543 "num_base_bdevs": 2, 00:18:37.543 "num_base_bdevs_discovered": 2, 00:18:37.543 "num_base_bdevs_operational": 2, 00:18:37.543 "base_bdevs_list": [ 00:18:37.543 { 00:18:37.543 "name": "BaseBdev1", 00:18:37.543 "uuid": "ef481691-0e20-5e65-bfe0-768ecd4f8b0d", 00:18:37.543 "is_configured": true, 00:18:37.543 "data_offset": 256, 00:18:37.543 "data_size": 7936 00:18:37.543 }, 00:18:37.543 { 00:18:37.543 "name": "BaseBdev2", 00:18:37.543 "uuid": "00001242-660f-59ed-b8f6-0506c839d8b2", 00:18:37.543 "is_configured": true, 00:18:37.543 "data_offset": 256, 00:18:37.543 "data_size": 7936 
00:18:37.543 } 00:18:37.543 ] 00:18:37.543 }' 00:18:37.543 10:03:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:37.543 10:03:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:38.112 10:03:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:38.112 10:03:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:38.112 10:03:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.112 10:03:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:38.112 [2024-10-21 10:03:14.509251] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:38.112 10:03:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.112 10:03:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:18:38.112 10:03:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:38.112 10:03:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.112 10:03:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:38.112 10:03:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:38.112 10:03:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.112 10:03:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:18:38.112 10:03:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:38.112 10:03:14 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:18:38.112 10:03:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:18:38.112 10:03:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:18:38.112 10:03:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:38.113 10:03:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:38.113 10:03:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:38.113 10:03:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:38.113 10:03:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:38.113 10:03:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:18:38.113 10:03:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:38.113 10:03:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:38.113 10:03:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:38.372 [2024-10-21 10:03:14.804510] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:38.372 /dev/nbd0 00:18:38.372 10:03:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:38.372 10:03:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:38.372 10:03:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:18:38.372 10:03:14 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@869 -- # local i 00:18:38.372 10:03:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:38.372 10:03:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:38.372 10:03:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:18:38.372 10:03:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:18:38.372 10:03:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:38.372 10:03:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:38.372 10:03:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:38.372 1+0 records in 00:18:38.372 1+0 records out 00:18:38.372 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000336979 s, 12.2 MB/s 00:18:38.372 10:03:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:38.372 10:03:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:18:38.372 10:03:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:38.372 10:03:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:38.372 10:03:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:18:38.372 10:03:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:38.372 10:03:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:38.372 10:03:14 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:18:38.372 10:03:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:18:38.372 10:03:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:18:39.310 7936+0 records in 00:18:39.310 7936+0 records out 00:18:39.310 32505856 bytes (33 MB, 31 MiB) copied, 0.755274 s, 43.0 MB/s 00:18:39.310 10:03:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:39.310 10:03:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:39.310 10:03:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:39.310 10:03:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:39.310 10:03:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:18:39.310 10:03:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:39.310 10:03:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:39.310 [2024-10-21 10:03:15.877666] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:39.310 10:03:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:39.310 10:03:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:39.310 10:03:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:39.310 10:03:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:39.310 10:03:15 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:39.310 10:03:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:39.569 10:03:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:18:39.569 10:03:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:18:39.569 10:03:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:39.569 10:03:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.569 10:03:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:39.569 [2024-10-21 10:03:15.913727] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:39.569 10:03:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.569 10:03:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:39.569 10:03:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:39.569 10:03:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:39.569 10:03:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:39.569 10:03:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:39.569 10:03:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:39.569 10:03:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:39.569 10:03:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:18:39.569 10:03:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:39.569 10:03:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:39.569 10:03:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.569 10:03:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.569 10:03:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:39.569 10:03:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:39.569 10:03:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.569 10:03:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:39.569 "name": "raid_bdev1", 00:18:39.569 "uuid": "b173df8c-9658-4ab6-aa9d-3ee67d2c5d79", 00:18:39.569 "strip_size_kb": 0, 00:18:39.569 "state": "online", 00:18:39.569 "raid_level": "raid1", 00:18:39.569 "superblock": true, 00:18:39.569 "num_base_bdevs": 2, 00:18:39.569 "num_base_bdevs_discovered": 1, 00:18:39.569 "num_base_bdevs_operational": 1, 00:18:39.569 "base_bdevs_list": [ 00:18:39.569 { 00:18:39.569 "name": null, 00:18:39.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:39.569 "is_configured": false, 00:18:39.569 "data_offset": 0, 00:18:39.569 "data_size": 7936 00:18:39.569 }, 00:18:39.569 { 00:18:39.569 "name": "BaseBdev2", 00:18:39.569 "uuid": "00001242-660f-59ed-b8f6-0506c839d8b2", 00:18:39.569 "is_configured": true, 00:18:39.569 "data_offset": 256, 00:18:39.569 "data_size": 7936 00:18:39.569 } 00:18:39.569 ] 00:18:39.569 }' 00:18:39.569 10:03:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:39.569 10:03:15 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:18:39.829 10:03:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:39.829 10:03:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.829 10:03:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:39.829 [2024-10-21 10:03:16.392983] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:39.829 [2024-10-21 10:03:16.414946] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018cff0 00:18:39.829 10:03:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.829 10:03:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:39.829 [2024-10-21 10:03:16.417533] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:41.212 10:03:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:41.212 10:03:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:41.212 10:03:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:41.212 10:03:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:41.212 10:03:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:41.212 10:03:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:41.212 10:03:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:41.212 10:03:17 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.212 10:03:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:41.212 10:03:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.212 10:03:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:41.212 "name": "raid_bdev1", 00:18:41.212 "uuid": "b173df8c-9658-4ab6-aa9d-3ee67d2c5d79", 00:18:41.212 "strip_size_kb": 0, 00:18:41.212 "state": "online", 00:18:41.212 "raid_level": "raid1", 00:18:41.212 "superblock": true, 00:18:41.212 "num_base_bdevs": 2, 00:18:41.212 "num_base_bdevs_discovered": 2, 00:18:41.212 "num_base_bdevs_operational": 2, 00:18:41.212 "process": { 00:18:41.212 "type": "rebuild", 00:18:41.212 "target": "spare", 00:18:41.212 "progress": { 00:18:41.212 "blocks": 2560, 00:18:41.212 "percent": 32 00:18:41.212 } 00:18:41.212 }, 00:18:41.212 "base_bdevs_list": [ 00:18:41.212 { 00:18:41.212 "name": "spare", 00:18:41.212 "uuid": "e8282fcc-dc6a-59da-ba23-813ed559503f", 00:18:41.212 "is_configured": true, 00:18:41.212 "data_offset": 256, 00:18:41.212 "data_size": 7936 00:18:41.212 }, 00:18:41.212 { 00:18:41.212 "name": "BaseBdev2", 00:18:41.212 "uuid": "00001242-660f-59ed-b8f6-0506c839d8b2", 00:18:41.212 "is_configured": true, 00:18:41.212 "data_offset": 256, 00:18:41.212 "data_size": 7936 00:18:41.212 } 00:18:41.212 ] 00:18:41.212 }' 00:18:41.212 10:03:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:41.212 10:03:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:41.212 10:03:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:41.212 10:03:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:41.212 10:03:17 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:41.212 10:03:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.212 10:03:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:41.212 [2024-10-21 10:03:17.569975] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:41.212 [2024-10-21 10:03:17.627811] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:41.212 [2024-10-21 10:03:17.628036] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:41.212 [2024-10-21 10:03:17.628057] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:41.212 [2024-10-21 10:03:17.628071] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:41.212 10:03:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.212 10:03:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:41.212 10:03:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:41.212 10:03:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:41.212 10:03:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:41.212 10:03:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:41.212 10:03:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:41.212 10:03:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:41.212 10:03:17 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:41.212 10:03:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:41.212 10:03:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:41.212 10:03:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:41.212 10:03:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.212 10:03:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:41.212 10:03:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:41.212 10:03:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.212 10:03:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:41.212 "name": "raid_bdev1", 00:18:41.212 "uuid": "b173df8c-9658-4ab6-aa9d-3ee67d2c5d79", 00:18:41.212 "strip_size_kb": 0, 00:18:41.212 "state": "online", 00:18:41.212 "raid_level": "raid1", 00:18:41.212 "superblock": true, 00:18:41.212 "num_base_bdevs": 2, 00:18:41.212 "num_base_bdevs_discovered": 1, 00:18:41.212 "num_base_bdevs_operational": 1, 00:18:41.212 "base_bdevs_list": [ 00:18:41.212 { 00:18:41.212 "name": null, 00:18:41.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.212 "is_configured": false, 00:18:41.212 "data_offset": 0, 00:18:41.212 "data_size": 7936 00:18:41.212 }, 00:18:41.212 { 00:18:41.212 "name": "BaseBdev2", 00:18:41.212 "uuid": "00001242-660f-59ed-b8f6-0506c839d8b2", 00:18:41.212 "is_configured": true, 00:18:41.212 "data_offset": 256, 00:18:41.212 "data_size": 7936 00:18:41.212 } 00:18:41.212 ] 00:18:41.212 }' 00:18:41.212 10:03:17 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:41.212 10:03:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:41.782 10:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:41.782 10:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:41.782 10:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:41.782 10:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:41.782 10:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:41.782 10:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:41.782 10:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.782 10:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:41.782 10:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:41.782 10:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.782 10:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:41.782 "name": "raid_bdev1", 00:18:41.782 "uuid": "b173df8c-9658-4ab6-aa9d-3ee67d2c5d79", 00:18:41.782 "strip_size_kb": 0, 00:18:41.782 "state": "online", 00:18:41.782 "raid_level": "raid1", 00:18:41.782 "superblock": true, 00:18:41.782 "num_base_bdevs": 2, 00:18:41.782 "num_base_bdevs_discovered": 1, 00:18:41.782 "num_base_bdevs_operational": 1, 00:18:41.782 "base_bdevs_list": [ 00:18:41.782 { 00:18:41.782 "name": null, 00:18:41.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.782 
"is_configured": false, 00:18:41.782 "data_offset": 0, 00:18:41.782 "data_size": 7936 00:18:41.782 }, 00:18:41.782 { 00:18:41.782 "name": "BaseBdev2", 00:18:41.782 "uuid": "00001242-660f-59ed-b8f6-0506c839d8b2", 00:18:41.782 "is_configured": true, 00:18:41.782 "data_offset": 256, 00:18:41.782 "data_size": 7936 00:18:41.782 } 00:18:41.782 ] 00:18:41.782 }' 00:18:41.782 10:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:41.782 10:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:41.782 10:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:41.782 10:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:41.782 10:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:41.782 10:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.782 10:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:41.782 [2024-10-21 10:03:18.236783] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:41.782 [2024-10-21 10:03:18.255765] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d0c0 00:18:41.782 10:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.782 10:03:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:41.782 [2024-10-21 10:03:18.258306] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:42.722 10:03:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:42.722 10:03:19 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:42.722 10:03:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:42.722 10:03:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:42.722 10:03:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:42.722 10:03:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:42.722 10:03:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:42.722 10:03:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.722 10:03:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:42.722 10:03:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.982 10:03:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:42.982 "name": "raid_bdev1", 00:18:42.982 "uuid": "b173df8c-9658-4ab6-aa9d-3ee67d2c5d79", 00:18:42.982 "strip_size_kb": 0, 00:18:42.982 "state": "online", 00:18:42.982 "raid_level": "raid1", 00:18:42.982 "superblock": true, 00:18:42.982 "num_base_bdevs": 2, 00:18:42.982 "num_base_bdevs_discovered": 2, 00:18:42.982 "num_base_bdevs_operational": 2, 00:18:42.982 "process": { 00:18:42.982 "type": "rebuild", 00:18:42.982 "target": "spare", 00:18:42.982 "progress": { 00:18:42.982 "blocks": 2560, 00:18:42.982 "percent": 32 00:18:42.982 } 00:18:42.982 }, 00:18:42.982 "base_bdevs_list": [ 00:18:42.982 { 00:18:42.982 "name": "spare", 00:18:42.982 "uuid": "e8282fcc-dc6a-59da-ba23-813ed559503f", 00:18:42.982 "is_configured": true, 00:18:42.982 "data_offset": 256, 00:18:42.982 "data_size": 7936 00:18:42.982 }, 
00:18:42.982 { 00:18:42.982 "name": "BaseBdev2", 00:18:42.982 "uuid": "00001242-660f-59ed-b8f6-0506c839d8b2", 00:18:42.982 "is_configured": true, 00:18:42.982 "data_offset": 256, 00:18:42.982 "data_size": 7936 00:18:42.982 } 00:18:42.982 ] 00:18:42.982 }' 00:18:42.982 10:03:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:42.982 10:03:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:42.982 10:03:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:42.982 10:03:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:42.982 10:03:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:42.982 10:03:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:42.982 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:42.982 10:03:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:18:42.982 10:03:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:42.982 10:03:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:18:42.982 10:03:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=726 00:18:42.982 10:03:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:42.982 10:03:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:42.982 10:03:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:42.982 10:03:19 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:42.982 10:03:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:42.982 10:03:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:42.982 10:03:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:42.982 10:03:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:42.982 10:03:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.982 10:03:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:42.982 10:03:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.982 10:03:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:42.982 "name": "raid_bdev1", 00:18:42.982 "uuid": "b173df8c-9658-4ab6-aa9d-3ee67d2c5d79", 00:18:42.982 "strip_size_kb": 0, 00:18:42.982 "state": "online", 00:18:42.982 "raid_level": "raid1", 00:18:42.982 "superblock": true, 00:18:42.982 "num_base_bdevs": 2, 00:18:42.982 "num_base_bdevs_discovered": 2, 00:18:42.982 "num_base_bdevs_operational": 2, 00:18:42.982 "process": { 00:18:42.982 "type": "rebuild", 00:18:42.982 "target": "spare", 00:18:42.982 "progress": { 00:18:42.982 "blocks": 2816, 00:18:42.982 "percent": 35 00:18:42.982 } 00:18:42.982 }, 00:18:42.982 "base_bdevs_list": [ 00:18:42.982 { 00:18:42.982 "name": "spare", 00:18:42.982 "uuid": "e8282fcc-dc6a-59da-ba23-813ed559503f", 00:18:42.982 "is_configured": true, 00:18:42.982 "data_offset": 256, 00:18:42.982 "data_size": 7936 00:18:42.982 }, 00:18:42.982 { 00:18:42.982 "name": "BaseBdev2", 00:18:42.982 "uuid": "00001242-660f-59ed-b8f6-0506c839d8b2", 00:18:42.982 
"is_configured": true, 00:18:42.982 "data_offset": 256, 00:18:42.982 "data_size": 7936 00:18:42.982 } 00:18:42.983 ] 00:18:42.983 }' 00:18:42.983 10:03:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:42.983 10:03:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:42.983 10:03:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:42.983 10:03:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:42.983 10:03:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:44.365 10:03:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:44.365 10:03:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:44.365 10:03:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:44.365 10:03:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:44.365 10:03:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:44.365 10:03:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:44.365 10:03:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:44.365 10:03:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.365 10:03:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.365 10:03:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:44.365 10:03:20 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.365 10:03:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:44.365 "name": "raid_bdev1", 00:18:44.365 "uuid": "b173df8c-9658-4ab6-aa9d-3ee67d2c5d79", 00:18:44.365 "strip_size_kb": 0, 00:18:44.365 "state": "online", 00:18:44.365 "raid_level": "raid1", 00:18:44.365 "superblock": true, 00:18:44.365 "num_base_bdevs": 2, 00:18:44.365 "num_base_bdevs_discovered": 2, 00:18:44.365 "num_base_bdevs_operational": 2, 00:18:44.365 "process": { 00:18:44.365 "type": "rebuild", 00:18:44.365 "target": "spare", 00:18:44.365 "progress": { 00:18:44.365 "blocks": 5632, 00:18:44.365 "percent": 70 00:18:44.365 } 00:18:44.365 }, 00:18:44.365 "base_bdevs_list": [ 00:18:44.365 { 00:18:44.365 "name": "spare", 00:18:44.365 "uuid": "e8282fcc-dc6a-59da-ba23-813ed559503f", 00:18:44.365 "is_configured": true, 00:18:44.365 "data_offset": 256, 00:18:44.365 "data_size": 7936 00:18:44.365 }, 00:18:44.365 { 00:18:44.365 "name": "BaseBdev2", 00:18:44.365 "uuid": "00001242-660f-59ed-b8f6-0506c839d8b2", 00:18:44.365 "is_configured": true, 00:18:44.365 "data_offset": 256, 00:18:44.365 "data_size": 7936 00:18:44.365 } 00:18:44.365 ] 00:18:44.365 }' 00:18:44.365 10:03:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:44.365 10:03:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:44.365 10:03:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:44.365 10:03:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:44.365 10:03:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:44.935 [2024-10-21 10:03:21.384016] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process 
completed on raid_bdev1 00:18:44.935 [2024-10-21 10:03:21.384127] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:44.935 [2024-10-21 10:03:21.384264] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:45.194 10:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:45.194 10:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:45.194 10:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:45.194 10:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:45.194 10:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:45.194 10:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:45.194 10:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:45.194 10:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:45.194 10:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.194 10:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:45.194 10:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.194 10:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:45.194 "name": "raid_bdev1", 00:18:45.194 "uuid": "b173df8c-9658-4ab6-aa9d-3ee67d2c5d79", 00:18:45.194 "strip_size_kb": 0, 00:18:45.194 "state": "online", 00:18:45.194 "raid_level": "raid1", 00:18:45.194 "superblock": true, 00:18:45.194 
"num_base_bdevs": 2, 00:18:45.194 "num_base_bdevs_discovered": 2, 00:18:45.194 "num_base_bdevs_operational": 2, 00:18:45.194 "base_bdevs_list": [ 00:18:45.194 { 00:18:45.194 "name": "spare", 00:18:45.194 "uuid": "e8282fcc-dc6a-59da-ba23-813ed559503f", 00:18:45.194 "is_configured": true, 00:18:45.194 "data_offset": 256, 00:18:45.194 "data_size": 7936 00:18:45.194 }, 00:18:45.194 { 00:18:45.194 "name": "BaseBdev2", 00:18:45.194 "uuid": "00001242-660f-59ed-b8f6-0506c839d8b2", 00:18:45.194 "is_configured": true, 00:18:45.194 "data_offset": 256, 00:18:45.194 "data_size": 7936 00:18:45.194 } 00:18:45.194 ] 00:18:45.194 }' 00:18:45.194 10:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:45.507 10:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:45.507 10:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:45.507 10:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:45.507 10:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:18:45.507 10:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:45.507 10:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:45.507 10:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:45.507 10:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:45.507 10:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:45.507 10:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:45.507 10:03:21 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:45.507 10:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.507 10:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:45.507 10:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.507 10:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:45.507 "name": "raid_bdev1", 00:18:45.507 "uuid": "b173df8c-9658-4ab6-aa9d-3ee67d2c5d79", 00:18:45.507 "strip_size_kb": 0, 00:18:45.507 "state": "online", 00:18:45.507 "raid_level": "raid1", 00:18:45.507 "superblock": true, 00:18:45.507 "num_base_bdevs": 2, 00:18:45.507 "num_base_bdevs_discovered": 2, 00:18:45.507 "num_base_bdevs_operational": 2, 00:18:45.507 "base_bdevs_list": [ 00:18:45.507 { 00:18:45.507 "name": "spare", 00:18:45.507 "uuid": "e8282fcc-dc6a-59da-ba23-813ed559503f", 00:18:45.507 "is_configured": true, 00:18:45.507 "data_offset": 256, 00:18:45.507 "data_size": 7936 00:18:45.507 }, 00:18:45.507 { 00:18:45.507 "name": "BaseBdev2", 00:18:45.507 "uuid": "00001242-660f-59ed-b8f6-0506c839d8b2", 00:18:45.507 "is_configured": true, 00:18:45.507 "data_offset": 256, 00:18:45.507 "data_size": 7936 00:18:45.507 } 00:18:45.507 ] 00:18:45.507 }' 00:18:45.507 10:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:45.507 10:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:45.507 10:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:45.507 10:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:45.507 10:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:45.507 10:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:45.507 10:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:45.507 10:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:45.507 10:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:45.507 10:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:45.507 10:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:45.507 10:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:45.507 10:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:45.507 10:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:45.507 10:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:45.507 10:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.507 10:03:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:45.507 10:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:45.507 10:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.507 10:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:45.507 "name": "raid_bdev1", 00:18:45.507 "uuid": "b173df8c-9658-4ab6-aa9d-3ee67d2c5d79", 00:18:45.507 
"strip_size_kb": 0, 00:18:45.507 "state": "online", 00:18:45.507 "raid_level": "raid1", 00:18:45.507 "superblock": true, 00:18:45.507 "num_base_bdevs": 2, 00:18:45.507 "num_base_bdevs_discovered": 2, 00:18:45.507 "num_base_bdevs_operational": 2, 00:18:45.507 "base_bdevs_list": [ 00:18:45.507 { 00:18:45.507 "name": "spare", 00:18:45.507 "uuid": "e8282fcc-dc6a-59da-ba23-813ed559503f", 00:18:45.507 "is_configured": true, 00:18:45.507 "data_offset": 256, 00:18:45.507 "data_size": 7936 00:18:45.507 }, 00:18:45.507 { 00:18:45.507 "name": "BaseBdev2", 00:18:45.507 "uuid": "00001242-660f-59ed-b8f6-0506c839d8b2", 00:18:45.507 "is_configured": true, 00:18:45.507 "data_offset": 256, 00:18:45.507 "data_size": 7936 00:18:45.507 } 00:18:45.507 ] 00:18:45.507 }' 00:18:45.507 10:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:45.507 10:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:46.106 10:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:46.106 10:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.106 10:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:46.106 [2024-10-21 10:03:22.495932] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:46.106 [2024-10-21 10:03:22.496096] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:46.106 [2024-10-21 10:03:22.496235] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:46.106 [2024-10-21 10:03:22.496367] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:46.106 [2024-10-21 10:03:22.496419] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000005b80 name raid_bdev1, 
state offline 00:18:46.106 10:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.106 10:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.106 10:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:18:46.106 10:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.106 10:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:46.106 10:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.106 10:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:46.106 10:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:46.106 10:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:46.106 10:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:46.106 10:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:46.106 10:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:46.106 10:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:46.106 10:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:46.106 10:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:46.106 10:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:18:46.106 10:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:46.106 10:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:46.106 10:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:46.366 /dev/nbd0 00:18:46.366 10:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:46.366 10:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:46.366 10:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:18:46.366 10:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:18:46.366 10:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:46.366 10:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:46.366 10:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:18:46.366 10:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:18:46.366 10:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:46.366 10:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:46.366 10:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:46.366 1+0 records in 00:18:46.366 1+0 records out 00:18:46.366 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000467671 s, 8.8 MB/s 00:18:46.366 10:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:46.366 10:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:18:46.366 10:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:46.366 10:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:46.366 10:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:18:46.366 10:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:46.366 10:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:46.366 10:03:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:46.626 /dev/nbd1 00:18:46.626 10:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:46.626 10:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:46.626 10:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:18:46.626 10:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:18:46.626 10:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:46.626 10:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:46.626 10:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:18:46.626 10:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:18:46.626 10:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:46.626 10:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:46.626 10:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:46.626 1+0 records in 00:18:46.626 1+0 records out 00:18:46.626 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00042392 s, 9.7 MB/s 00:18:46.626 10:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:46.626 10:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:18:46.626 10:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:46.626 10:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:46.626 10:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:18:46.626 10:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:46.626 10:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:46.626 10:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:46.885 10:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:46.885 10:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:46.885 10:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:46.885 10:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:18:46.885 10:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:18:46.885 10:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:46.885 10:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:47.144 10:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:47.144 10:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:47.144 10:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:47.144 10:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:47.144 10:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:47.144 10:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:47.144 10:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:18:47.144 10:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:18:47.144 10:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:47.144 10:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:47.404 10:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:47.404 10:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:47.404 10:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 
00:18:47.404 10:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:47.404 10:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:47.404 10:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:47.404 10:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:18:47.404 10:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:18:47.404 10:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:47.404 10:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:47.404 10:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.404 10:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:47.404 10:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.404 10:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:47.404 10:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.404 10:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:47.404 [2024-10-21 10:03:23.907032] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:47.404 [2024-10-21 10:03:23.907132] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:47.404 [2024-10-21 10:03:23.907163] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:18:47.404 [2024-10-21 10:03:23.907175] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:18:47.404 [2024-10-21 10:03:23.909914] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:47.404 [2024-10-21 10:03:23.909953] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:47.404 [2024-10-21 10:03:23.910027] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:47.404 [2024-10-21 10:03:23.910095] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:47.404 [2024-10-21 10:03:23.910275] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:47.404 spare 00:18:47.404 10:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.404 10:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:47.404 10:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.405 10:03:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:47.665 [2024-10-21 10:03:24.010189] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000005f00 00:18:47.665 [2024-10-21 10:03:24.010238] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:47.665 [2024-10-21 10:03:24.010391] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c18e0 00:18:47.665 [2024-10-21 10:03:24.010627] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000005f00 00:18:47.665 [2024-10-21 10:03:24.010640] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000005f00 00:18:47.665 [2024-10-21 10:03:24.010842] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:47.665 10:03:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:18:47.665 10:03:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:47.665 10:03:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:47.665 10:03:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:47.665 10:03:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:47.665 10:03:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:47.665 10:03:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:47.665 10:03:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:47.665 10:03:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:47.665 10:03:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:47.665 10:03:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:47.665 10:03:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.665 10:03:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.665 10:03:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:47.665 10:03:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:47.665 10:03:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.665 10:03:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:47.665 "name": "raid_bdev1", 00:18:47.665 "uuid": 
"b173df8c-9658-4ab6-aa9d-3ee67d2c5d79", 00:18:47.665 "strip_size_kb": 0, 00:18:47.665 "state": "online", 00:18:47.665 "raid_level": "raid1", 00:18:47.665 "superblock": true, 00:18:47.665 "num_base_bdevs": 2, 00:18:47.665 "num_base_bdevs_discovered": 2, 00:18:47.665 "num_base_bdevs_operational": 2, 00:18:47.665 "base_bdevs_list": [ 00:18:47.665 { 00:18:47.665 "name": "spare", 00:18:47.665 "uuid": "e8282fcc-dc6a-59da-ba23-813ed559503f", 00:18:47.665 "is_configured": true, 00:18:47.665 "data_offset": 256, 00:18:47.665 "data_size": 7936 00:18:47.665 }, 00:18:47.665 { 00:18:47.665 "name": "BaseBdev2", 00:18:47.665 "uuid": "00001242-660f-59ed-b8f6-0506c839d8b2", 00:18:47.665 "is_configured": true, 00:18:47.665 "data_offset": 256, 00:18:47.665 "data_size": 7936 00:18:47.665 } 00:18:47.665 ] 00:18:47.665 }' 00:18:47.665 10:03:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:47.665 10:03:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:47.925 10:03:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:47.925 10:03:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:47.925 10:03:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:47.925 10:03:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:47.925 10:03:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:47.925 10:03:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:47.925 10:03:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.925 10:03:24 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.925 10:03:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:47.925 10:03:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.925 10:03:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:47.925 "name": "raid_bdev1", 00:18:47.925 "uuid": "b173df8c-9658-4ab6-aa9d-3ee67d2c5d79", 00:18:47.925 "strip_size_kb": 0, 00:18:47.925 "state": "online", 00:18:47.925 "raid_level": "raid1", 00:18:47.925 "superblock": true, 00:18:47.925 "num_base_bdevs": 2, 00:18:47.925 "num_base_bdevs_discovered": 2, 00:18:47.925 "num_base_bdevs_operational": 2, 00:18:47.925 "base_bdevs_list": [ 00:18:47.925 { 00:18:47.925 "name": "spare", 00:18:47.925 "uuid": "e8282fcc-dc6a-59da-ba23-813ed559503f", 00:18:47.925 "is_configured": true, 00:18:47.925 "data_offset": 256, 00:18:47.925 "data_size": 7936 00:18:47.925 }, 00:18:47.925 { 00:18:47.925 "name": "BaseBdev2", 00:18:47.925 "uuid": "00001242-660f-59ed-b8f6-0506c839d8b2", 00:18:47.925 "is_configured": true, 00:18:47.925 "data_offset": 256, 00:18:47.925 "data_size": 7936 00:18:47.925 } 00:18:47.925 ] 00:18:47.925 }' 00:18:47.925 10:03:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:47.925 10:03:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:47.925 10:03:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:48.184 10:03:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:48.184 10:03:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:48.184 10:03:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:18:48.184 10:03:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.184 10:03:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:48.184 10:03:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.184 10:03:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:48.184 10:03:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:48.184 10:03:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.184 10:03:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:48.184 [2024-10-21 10:03:24.617917] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:48.184 10:03:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.184 10:03:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:48.184 10:03:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:48.184 10:03:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:48.184 10:03:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:48.184 10:03:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:48.184 10:03:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:48.184 10:03:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:48.184 10:03:24 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:48.184 10:03:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:48.184 10:03:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:48.184 10:03:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:48.184 10:03:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.184 10:03:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:48.184 10:03:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:48.184 10:03:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.184 10:03:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:48.184 "name": "raid_bdev1", 00:18:48.184 "uuid": "b173df8c-9658-4ab6-aa9d-3ee67d2c5d79", 00:18:48.184 "strip_size_kb": 0, 00:18:48.184 "state": "online", 00:18:48.184 "raid_level": "raid1", 00:18:48.184 "superblock": true, 00:18:48.184 "num_base_bdevs": 2, 00:18:48.184 "num_base_bdevs_discovered": 1, 00:18:48.184 "num_base_bdevs_operational": 1, 00:18:48.184 "base_bdevs_list": [ 00:18:48.184 { 00:18:48.184 "name": null, 00:18:48.185 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:48.185 "is_configured": false, 00:18:48.185 "data_offset": 0, 00:18:48.185 "data_size": 7936 00:18:48.185 }, 00:18:48.185 { 00:18:48.185 "name": "BaseBdev2", 00:18:48.185 "uuid": "00001242-660f-59ed-b8f6-0506c839d8b2", 00:18:48.185 "is_configured": true, 00:18:48.185 "data_offset": 256, 00:18:48.185 "data_size": 7936 00:18:48.185 } 00:18:48.185 ] 00:18:48.185 }' 00:18:48.185 10:03:24 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:48.185 10:03:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:48.753 10:03:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:48.753 10:03:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.753 10:03:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:48.753 [2024-10-21 10:03:25.057254] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:48.753 [2024-10-21 10:03:25.057620] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:48.753 [2024-10-21 10:03:25.057698] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:18:48.753 [2024-10-21 10:03:25.057790] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:48.753 [2024-10-21 10:03:25.076781] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c19b0 00:18:48.753 10:03:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.753 10:03:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:48.753 [2024-10-21 10:03:25.079323] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:49.690 10:03:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:49.690 10:03:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:49.690 10:03:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:49.690 10:03:26 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:49.690 10:03:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:49.690 10:03:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:49.690 10:03:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:49.690 10:03:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.690 10:03:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:49.690 10:03:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.690 10:03:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:49.691 "name": "raid_bdev1", 00:18:49.691 "uuid": "b173df8c-9658-4ab6-aa9d-3ee67d2c5d79", 00:18:49.691 "strip_size_kb": 0, 00:18:49.691 "state": "online", 00:18:49.691 "raid_level": "raid1", 00:18:49.691 "superblock": true, 00:18:49.691 "num_base_bdevs": 2, 00:18:49.691 "num_base_bdevs_discovered": 2, 00:18:49.691 "num_base_bdevs_operational": 2, 00:18:49.691 "process": { 00:18:49.691 "type": "rebuild", 00:18:49.691 "target": "spare", 00:18:49.691 "progress": { 00:18:49.691 "blocks": 2560, 00:18:49.691 "percent": 32 00:18:49.691 } 00:18:49.691 }, 00:18:49.691 "base_bdevs_list": [ 00:18:49.691 { 00:18:49.691 "name": "spare", 00:18:49.691 "uuid": "e8282fcc-dc6a-59da-ba23-813ed559503f", 00:18:49.691 "is_configured": true, 00:18:49.691 "data_offset": 256, 00:18:49.691 "data_size": 7936 00:18:49.691 }, 00:18:49.691 { 00:18:49.691 "name": "BaseBdev2", 00:18:49.691 "uuid": "00001242-660f-59ed-b8f6-0506c839d8b2", 00:18:49.691 "is_configured": true, 00:18:49.691 "data_offset": 256, 00:18:49.691 "data_size": 7936 00:18:49.691 } 00:18:49.691 ] 00:18:49.691 
}' 00:18:49.691 10:03:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:49.691 10:03:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:49.691 10:03:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:49.691 10:03:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:49.691 10:03:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:49.691 10:03:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.691 10:03:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:49.691 [2024-10-21 10:03:26.239333] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:49.948 [2024-10-21 10:03:26.288873] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:49.948 [2024-10-21 10:03:26.289034] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:49.948 [2024-10-21 10:03:26.289079] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:49.948 [2024-10-21 10:03:26.289111] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:49.948 10:03:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.948 10:03:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:49.948 10:03:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:49.948 10:03:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:18:49.948 10:03:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:49.948 10:03:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:49.948 10:03:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:49.948 10:03:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:49.948 10:03:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:49.948 10:03:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:49.948 10:03:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:49.948 10:03:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:49.948 10:03:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:49.948 10:03:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.948 10:03:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:49.948 10:03:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.948 10:03:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:49.948 "name": "raid_bdev1", 00:18:49.948 "uuid": "b173df8c-9658-4ab6-aa9d-3ee67d2c5d79", 00:18:49.948 "strip_size_kb": 0, 00:18:49.948 "state": "online", 00:18:49.948 "raid_level": "raid1", 00:18:49.948 "superblock": true, 00:18:49.948 "num_base_bdevs": 2, 00:18:49.948 "num_base_bdevs_discovered": 1, 00:18:49.948 "num_base_bdevs_operational": 1, 00:18:49.948 "base_bdevs_list": [ 00:18:49.948 { 00:18:49.948 "name": 
null, 00:18:49.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:49.948 "is_configured": false, 00:18:49.948 "data_offset": 0, 00:18:49.948 "data_size": 7936 00:18:49.948 }, 00:18:49.948 { 00:18:49.948 "name": "BaseBdev2", 00:18:49.948 "uuid": "00001242-660f-59ed-b8f6-0506c839d8b2", 00:18:49.948 "is_configured": true, 00:18:49.948 "data_offset": 256, 00:18:49.948 "data_size": 7936 00:18:49.948 } 00:18:49.948 ] 00:18:49.948 }' 00:18:49.948 10:03:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:49.948 10:03:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:50.206 10:03:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:50.206 10:03:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.206 10:03:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:50.464 [2024-10-21 10:03:26.808951] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:50.464 [2024-10-21 10:03:26.809047] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:50.464 [2024-10-21 10:03:26.809080] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:18:50.464 [2024-10-21 10:03:26.809095] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:50.464 [2024-10-21 10:03:26.809437] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:50.464 [2024-10-21 10:03:26.809459] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:50.464 [2024-10-21 10:03:26.809534] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:50.464 [2024-10-21 10:03:26.809552] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock 
seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:50.464 [2024-10-21 10:03:26.809565] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:18:50.464 [2024-10-21 10:03:26.809615] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:50.464 [2024-10-21 10:03:26.828697] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1a80 00:18:50.464 spare 00:18:50.464 10:03:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.464 10:03:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:50.464 [2024-10-21 10:03:26.831263] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:51.399 10:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:51.399 10:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:51.399 10:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:51.399 10:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:51.399 10:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:51.399 10:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:51.399 10:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:51.399 10:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.399 10:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:51.399 10:03:27 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.399 10:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:51.399 "name": "raid_bdev1", 00:18:51.400 "uuid": "b173df8c-9658-4ab6-aa9d-3ee67d2c5d79", 00:18:51.400 "strip_size_kb": 0, 00:18:51.400 "state": "online", 00:18:51.400 "raid_level": "raid1", 00:18:51.400 "superblock": true, 00:18:51.400 "num_base_bdevs": 2, 00:18:51.400 "num_base_bdevs_discovered": 2, 00:18:51.400 "num_base_bdevs_operational": 2, 00:18:51.400 "process": { 00:18:51.400 "type": "rebuild", 00:18:51.400 "target": "spare", 00:18:51.400 "progress": { 00:18:51.400 "blocks": 2560, 00:18:51.400 "percent": 32 00:18:51.400 } 00:18:51.400 }, 00:18:51.400 "base_bdevs_list": [ 00:18:51.400 { 00:18:51.400 "name": "spare", 00:18:51.400 "uuid": "e8282fcc-dc6a-59da-ba23-813ed559503f", 00:18:51.400 "is_configured": true, 00:18:51.400 "data_offset": 256, 00:18:51.400 "data_size": 7936 00:18:51.400 }, 00:18:51.400 { 00:18:51.400 "name": "BaseBdev2", 00:18:51.400 "uuid": "00001242-660f-59ed-b8f6-0506c839d8b2", 00:18:51.400 "is_configured": true, 00:18:51.400 "data_offset": 256, 00:18:51.400 "data_size": 7936 00:18:51.400 } 00:18:51.400 ] 00:18:51.400 }' 00:18:51.400 10:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:51.400 10:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:51.400 10:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:51.400 10:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:51.400 10:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:51.400 10:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.400 10:03:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:51.400 [2024-10-21 10:03:27.991334] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:51.658 [2024-10-21 10:03:28.040951] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:51.658 [2024-10-21 10:03:28.041157] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:51.659 [2024-10-21 10:03:28.041182] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:51.659 [2024-10-21 10:03:28.041192] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:51.659 10:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.659 10:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:51.659 10:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:51.659 10:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:51.659 10:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:51.659 10:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:51.659 10:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:51.659 10:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:51.659 10:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:51.659 10:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:18:51.659 10:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:51.659 10:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:51.659 10:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:51.659 10:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.659 10:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:51.659 10:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.659 10:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:51.659 "name": "raid_bdev1", 00:18:51.659 "uuid": "b173df8c-9658-4ab6-aa9d-3ee67d2c5d79", 00:18:51.659 "strip_size_kb": 0, 00:18:51.659 "state": "online", 00:18:51.659 "raid_level": "raid1", 00:18:51.659 "superblock": true, 00:18:51.659 "num_base_bdevs": 2, 00:18:51.659 "num_base_bdevs_discovered": 1, 00:18:51.659 "num_base_bdevs_operational": 1, 00:18:51.659 "base_bdevs_list": [ 00:18:51.659 { 00:18:51.659 "name": null, 00:18:51.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:51.659 "is_configured": false, 00:18:51.659 "data_offset": 0, 00:18:51.659 "data_size": 7936 00:18:51.659 }, 00:18:51.659 { 00:18:51.659 "name": "BaseBdev2", 00:18:51.659 "uuid": "00001242-660f-59ed-b8f6-0506c839d8b2", 00:18:51.659 "is_configured": true, 00:18:51.659 "data_offset": 256, 00:18:51.659 "data_size": 7936 00:18:51.659 } 00:18:51.659 ] 00:18:51.659 }' 00:18:51.659 10:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:51.659 10:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:52.263 10:03:28 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:52.263 10:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:52.263 10:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:52.263 10:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:52.263 10:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:52.263 10:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:52.263 10:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.263 10:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:52.263 10:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:52.263 10:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.264 10:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:52.264 "name": "raid_bdev1", 00:18:52.264 "uuid": "b173df8c-9658-4ab6-aa9d-3ee67d2c5d79", 00:18:52.264 "strip_size_kb": 0, 00:18:52.264 "state": "online", 00:18:52.264 "raid_level": "raid1", 00:18:52.264 "superblock": true, 00:18:52.264 "num_base_bdevs": 2, 00:18:52.264 "num_base_bdevs_discovered": 1, 00:18:52.264 "num_base_bdevs_operational": 1, 00:18:52.264 "base_bdevs_list": [ 00:18:52.264 { 00:18:52.264 "name": null, 00:18:52.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:52.264 "is_configured": false, 00:18:52.264 "data_offset": 0, 00:18:52.264 "data_size": 7936 00:18:52.264 }, 00:18:52.264 { 00:18:52.264 "name": "BaseBdev2", 00:18:52.264 "uuid": "00001242-660f-59ed-b8f6-0506c839d8b2", 
00:18:52.264 "is_configured": true, 00:18:52.264 "data_offset": 256, 00:18:52.264 "data_size": 7936 00:18:52.264 } 00:18:52.264 ] 00:18:52.264 }' 00:18:52.264 10:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:52.264 10:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:52.264 10:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:52.264 10:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:52.264 10:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:52.264 10:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.264 10:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:52.264 10:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.264 10:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:52.264 10:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.264 10:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:52.264 [2024-10-21 10:03:28.689004] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:52.264 [2024-10-21 10:03:28.689105] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:52.264 [2024-10-21 10:03:28.689141] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:18:52.264 [2024-10-21 10:03:28.689153] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:18:52.264 [2024-10-21 10:03:28.689492] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:52.264 [2024-10-21 10:03:28.689509] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:52.264 [2024-10-21 10:03:28.689594] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:52.264 [2024-10-21 10:03:28.689612] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:52.264 [2024-10-21 10:03:28.689625] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:52.264 [2024-10-21 10:03:28.689643] bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:52.264 BaseBdev1 00:18:52.264 10:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.264 10:03:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:53.212 10:03:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:53.212 10:03:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:53.213 10:03:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:53.213 10:03:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:53.213 10:03:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:53.213 10:03:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:53.213 10:03:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:53.213 10:03:29 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:53.213 10:03:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:53.213 10:03:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:53.213 10:03:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:53.213 10:03:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:53.213 10:03:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.213 10:03:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:53.213 10:03:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.213 10:03:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:53.213 "name": "raid_bdev1", 00:18:53.213 "uuid": "b173df8c-9658-4ab6-aa9d-3ee67d2c5d79", 00:18:53.213 "strip_size_kb": 0, 00:18:53.213 "state": "online", 00:18:53.213 "raid_level": "raid1", 00:18:53.213 "superblock": true, 00:18:53.213 "num_base_bdevs": 2, 00:18:53.213 "num_base_bdevs_discovered": 1, 00:18:53.213 "num_base_bdevs_operational": 1, 00:18:53.213 "base_bdevs_list": [ 00:18:53.213 { 00:18:53.213 "name": null, 00:18:53.213 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:53.213 "is_configured": false, 00:18:53.213 "data_offset": 0, 00:18:53.213 "data_size": 7936 00:18:53.213 }, 00:18:53.213 { 00:18:53.213 "name": "BaseBdev2", 00:18:53.213 "uuid": "00001242-660f-59ed-b8f6-0506c839d8b2", 00:18:53.213 "is_configured": true, 00:18:53.213 "data_offset": 256, 00:18:53.213 "data_size": 7936 00:18:53.213 } 00:18:53.213 ] 00:18:53.213 }' 00:18:53.213 10:03:29 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:53.213 10:03:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:53.782 10:03:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:53.782 10:03:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:53.782 10:03:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:53.782 10:03:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:53.782 10:03:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:53.782 10:03:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:53.782 10:03:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.782 10:03:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:53.782 10:03:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:53.782 10:03:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.782 10:03:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:53.782 "name": "raid_bdev1", 00:18:53.782 "uuid": "b173df8c-9658-4ab6-aa9d-3ee67d2c5d79", 00:18:53.782 "strip_size_kb": 0, 00:18:53.782 "state": "online", 00:18:53.782 "raid_level": "raid1", 00:18:53.782 "superblock": true, 00:18:53.782 "num_base_bdevs": 2, 00:18:53.782 "num_base_bdevs_discovered": 1, 00:18:53.782 "num_base_bdevs_operational": 1, 00:18:53.782 "base_bdevs_list": [ 00:18:53.782 { 00:18:53.782 "name": null, 00:18:53.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:53.782 
"is_configured": false, 00:18:53.782 "data_offset": 0, 00:18:53.782 "data_size": 7936 00:18:53.782 }, 00:18:53.782 { 00:18:53.782 "name": "BaseBdev2", 00:18:53.782 "uuid": "00001242-660f-59ed-b8f6-0506c839d8b2", 00:18:53.782 "is_configured": true, 00:18:53.782 "data_offset": 256, 00:18:53.782 "data_size": 7936 00:18:53.782 } 00:18:53.782 ] 00:18:53.782 }' 00:18:53.782 10:03:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:53.782 10:03:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:53.782 10:03:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:53.782 10:03:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:53.782 10:03:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:53.782 10:03:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:18:53.782 10:03:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:53.782 10:03:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:53.782 10:03:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:53.782 10:03:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:53.782 10:03:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:53.782 10:03:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:53.782 10:03:30 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.782 10:03:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:53.782 [2024-10-21 10:03:30.330468] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:53.782 [2024-10-21 10:03:30.330742] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:53.782 [2024-10-21 10:03:30.330764] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:53.782 request: 00:18:53.782 { 00:18:53.782 "base_bdev": "BaseBdev1", 00:18:53.782 "raid_bdev": "raid_bdev1", 00:18:53.782 "method": "bdev_raid_add_base_bdev", 00:18:53.782 "req_id": 1 00:18:53.782 } 00:18:53.782 Got JSON-RPC error response 00:18:53.782 response: 00:18:53.782 { 00:18:53.782 "code": -22, 00:18:53.782 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:53.782 } 00:18:53.782 10:03:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:53.782 10:03:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # es=1 00:18:53.782 10:03:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:53.782 10:03:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:53.782 10:03:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:53.782 10:03:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:55.164 10:03:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:55.164 10:03:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:18:55.164 10:03:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:55.164 10:03:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:55.164 10:03:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:55.164 10:03:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:55.164 10:03:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:55.164 10:03:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:55.164 10:03:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:55.164 10:03:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:55.164 10:03:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.164 10:03:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.164 10:03:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:55.164 10:03:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:55.164 10:03:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.164 10:03:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:55.164 "name": "raid_bdev1", 00:18:55.164 "uuid": "b173df8c-9658-4ab6-aa9d-3ee67d2c5d79", 00:18:55.164 "strip_size_kb": 0, 00:18:55.164 "state": "online", 00:18:55.164 "raid_level": "raid1", 00:18:55.164 "superblock": true, 00:18:55.164 "num_base_bdevs": 2, 00:18:55.164 
"num_base_bdevs_discovered": 1, 00:18:55.164 "num_base_bdevs_operational": 1, 00:18:55.164 "base_bdevs_list": [ 00:18:55.164 { 00:18:55.164 "name": null, 00:18:55.164 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:55.164 "is_configured": false, 00:18:55.164 "data_offset": 0, 00:18:55.164 "data_size": 7936 00:18:55.164 }, 00:18:55.164 { 00:18:55.164 "name": "BaseBdev2", 00:18:55.164 "uuid": "00001242-660f-59ed-b8f6-0506c839d8b2", 00:18:55.164 "is_configured": true, 00:18:55.164 "data_offset": 256, 00:18:55.164 "data_size": 7936 00:18:55.164 } 00:18:55.164 ] 00:18:55.164 }' 00:18:55.164 10:03:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:55.164 10:03:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:55.424 10:03:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:55.424 10:03:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:55.424 10:03:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:55.424 10:03:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:55.424 10:03:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:55.424 10:03:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.424 10:03:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:55.424 10:03:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.424 10:03:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:55.424 10:03:31 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.424 10:03:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:55.424 "name": "raid_bdev1", 00:18:55.424 "uuid": "b173df8c-9658-4ab6-aa9d-3ee67d2c5d79", 00:18:55.424 "strip_size_kb": 0, 00:18:55.424 "state": "online", 00:18:55.424 "raid_level": "raid1", 00:18:55.424 "superblock": true, 00:18:55.424 "num_base_bdevs": 2, 00:18:55.424 "num_base_bdevs_discovered": 1, 00:18:55.424 "num_base_bdevs_operational": 1, 00:18:55.424 "base_bdevs_list": [ 00:18:55.424 { 00:18:55.424 "name": null, 00:18:55.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:55.424 "is_configured": false, 00:18:55.424 "data_offset": 0, 00:18:55.424 "data_size": 7936 00:18:55.424 }, 00:18:55.424 { 00:18:55.424 "name": "BaseBdev2", 00:18:55.424 "uuid": "00001242-660f-59ed-b8f6-0506c839d8b2", 00:18:55.424 "is_configured": true, 00:18:55.424 "data_offset": 256, 00:18:55.424 "data_size": 7936 00:18:55.424 } 00:18:55.424 ] 00:18:55.424 }' 00:18:55.424 10:03:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:55.424 10:03:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:55.424 10:03:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:55.424 10:03:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:55.424 10:03:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 87475 00:18:55.424 10:03:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@950 -- # '[' -z 87475 ']' 00:18:55.424 10:03:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # kill -0 87475 00:18:55.424 10:03:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@955 -- # uname 00:18:55.424 10:03:31 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:55.424 10:03:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87475 00:18:55.424 killing process with pid 87475 00:18:55.424 Received shutdown signal, test time was about 60.000000 seconds 00:18:55.424 00:18:55.424 Latency(us) 00:18:55.424 [2024-10-21T10:03:32.019Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:55.424 [2024-10-21T10:03:32.019Z] =================================================================================================================== 00:18:55.424 [2024-10-21T10:03:32.019Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:55.424 10:03:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:55.424 10:03:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:55.424 10:03:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87475' 00:18:55.425 10:03:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@969 -- # kill 87475 00:18:55.425 [2024-10-21 10:03:31.953589] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:55.425 10:03:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@974 -- # wait 87475 00:18:55.425 [2024-10-21 10:03:31.953795] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:55.425 [2024-10-21 10:03:31.953868] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:55.425 [2024-10-21 10:03:31.953883] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000005f00 name raid_bdev1, state offline 00:18:55.995 [2024-10-21 10:03:32.390971] bdev_raid.c:1413:raid_bdev_exit: 
*DEBUG*: raid_bdev_exit 00:18:57.376 10:03:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # return 0 00:18:57.376 00:18:57.376 real 0m21.002s 00:18:57.376 user 0m27.215s 00:18:57.376 sys 0m2.748s 00:18:57.376 10:03:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:57.376 10:03:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:57.376 ************************************ 00:18:57.376 END TEST raid_rebuild_test_sb_md_separate 00:18:57.376 ************************************ 00:18:57.376 10:03:33 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:18:57.376 10:03:33 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:18:57.376 10:03:33 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:18:57.376 10:03:33 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:57.376 10:03:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:57.376 ************************************ 00:18:57.376 START TEST raid_state_function_test_sb_md_interleaved 00:18:57.376 ************************************ 00:18:57.376 10:03:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:18:57.376 10:03:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:18:57.376 10:03:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:18:57.376 10:03:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:18:57.376 10:03:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:57.376 10:03:33 bdev_raid.raid_state_function_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:57.376 10:03:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:57.376 10:03:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:57.376 10:03:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:57.376 10:03:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:57.376 10:03:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:57.376 10:03:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:57.376 10:03:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:57.376 10:03:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:57.377 10:03:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:57.377 10:03:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:57.377 10:03:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:57.377 10:03:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:57.377 10:03:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:57.377 10:03:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:18:57.377 10:03:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:18:57.377 10:03:33 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:18:57.377 10:03:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:18:57.377 10:03:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=88171 00:18:57.377 10:03:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:57.377 10:03:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 88171' 00:18:57.377 Process raid pid: 88171 00:18:57.377 10:03:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 88171 00:18:57.377 10:03:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 88171 ']' 00:18:57.377 10:03:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:57.377 10:03:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:57.377 10:03:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:57.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:57.377 10:03:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:57.377 10:03:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:57.637 [2024-10-21 10:03:34.053977] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
00:18:57.637 [2024-10-21 10:03:34.054221] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:57.637 [2024-10-21 10:03:34.228537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:57.897 [2024-10-21 10:03:34.396334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:58.158 [2024-10-21 10:03:34.704872] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:58.158 [2024-10-21 10:03:34.704922] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:58.418 10:03:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:58.418 10:03:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:18:58.418 10:03:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:58.418 10:03:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.418 10:03:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:58.418 [2024-10-21 10:03:34.925480] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:58.418 [2024-10-21 10:03:34.925895] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:58.418 [2024-10-21 10:03:34.925919] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:58.418 [2024-10-21 10:03:34.925998] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:58.418 10:03:34 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.418 10:03:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:58.418 10:03:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:58.418 10:03:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:58.418 10:03:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:58.418 10:03:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:58.418 10:03:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:58.418 10:03:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:58.418 10:03:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:58.418 10:03:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:58.418 10:03:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:58.418 10:03:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:58.418 10:03:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.418 10:03:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:58.418 10:03:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:58.418 10:03:34 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.418 10:03:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:58.418 "name": "Existed_Raid", 00:18:58.418 "uuid": "b27aff6c-e71a-4f03-819e-95c89870a37d", 00:18:58.418 "strip_size_kb": 0, 00:18:58.418 "state": "configuring", 00:18:58.418 "raid_level": "raid1", 00:18:58.418 "superblock": true, 00:18:58.418 "num_base_bdevs": 2, 00:18:58.418 "num_base_bdevs_discovered": 0, 00:18:58.418 "num_base_bdevs_operational": 2, 00:18:58.418 "base_bdevs_list": [ 00:18:58.418 { 00:18:58.418 "name": "BaseBdev1", 00:18:58.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:58.418 "is_configured": false, 00:18:58.418 "data_offset": 0, 00:18:58.418 "data_size": 0 00:18:58.418 }, 00:18:58.418 { 00:18:58.418 "name": "BaseBdev2", 00:18:58.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:58.418 "is_configured": false, 00:18:58.418 "data_offset": 0, 00:18:58.418 "data_size": 0 00:18:58.418 } 00:18:58.418 ] 00:18:58.418 }' 00:18:58.418 10:03:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:58.418 10:03:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:58.987 10:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:58.987 10:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.987 10:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:58.987 [2024-10-21 10:03:35.408698] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:58.987 [2024-10-21 10:03:35.408853] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000005b80 name Existed_Raid, state 
configuring 00:18:58.988 10:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.988 10:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:58.988 10:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.988 10:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:58.988 [2024-10-21 10:03:35.420685] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:58.988 [2024-10-21 10:03:35.421223] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:58.988 [2024-10-21 10:03:35.421295] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:58.988 [2024-10-21 10:03:35.421402] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:58.988 10:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.988 10:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:18:58.988 10:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.988 10:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:58.988 [2024-10-21 10:03:35.490797] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:58.988 BaseBdev1 00:18:58.988 10:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.988 10:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:58.988 10:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:18:58.988 10:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:58.988 10:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local i 00:18:58.988 10:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:58.988 10:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:58.988 10:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:18:58.988 10:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.988 10:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:58.988 10:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.988 10:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:58.988 10:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.988 10:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:58.988 [ 00:18:58.988 { 00:18:58.988 "name": "BaseBdev1", 00:18:58.988 "aliases": [ 00:18:58.988 "9c713175-f576-412d-aa24-36ccfbcc4da1" 00:18:58.988 ], 00:18:58.988 "product_name": "Malloc disk", 00:18:58.988 "block_size": 4128, 00:18:58.988 "num_blocks": 8192, 00:18:58.988 "uuid": "9c713175-f576-412d-aa24-36ccfbcc4da1", 00:18:58.988 "md_size": 32, 00:18:58.988 
"md_interleave": true, 00:18:58.988 "dif_type": 0, 00:18:58.988 "assigned_rate_limits": { 00:18:58.988 "rw_ios_per_sec": 0, 00:18:58.988 "rw_mbytes_per_sec": 0, 00:18:58.988 "r_mbytes_per_sec": 0, 00:18:58.988 "w_mbytes_per_sec": 0 00:18:58.988 }, 00:18:58.988 "claimed": true, 00:18:58.988 "claim_type": "exclusive_write", 00:18:58.988 "zoned": false, 00:18:58.988 "supported_io_types": { 00:18:58.988 "read": true, 00:18:58.988 "write": true, 00:18:58.988 "unmap": true, 00:18:58.988 "flush": true, 00:18:58.988 "reset": true, 00:18:58.988 "nvme_admin": false, 00:18:58.988 "nvme_io": false, 00:18:58.988 "nvme_io_md": false, 00:18:58.988 "write_zeroes": true, 00:18:58.988 "zcopy": true, 00:18:58.988 "get_zone_info": false, 00:18:58.988 "zone_management": false, 00:18:58.988 "zone_append": false, 00:18:58.988 "compare": false, 00:18:58.988 "compare_and_write": false, 00:18:58.988 "abort": true, 00:18:58.988 "seek_hole": false, 00:18:58.988 "seek_data": false, 00:18:58.988 "copy": true, 00:18:58.988 "nvme_iov_md": false 00:18:58.988 }, 00:18:58.988 "memory_domains": [ 00:18:58.988 { 00:18:58.988 "dma_device_id": "system", 00:18:58.988 "dma_device_type": 1 00:18:58.988 }, 00:18:58.988 { 00:18:58.988 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:58.988 "dma_device_type": 2 00:18:58.988 } 00:18:58.988 ], 00:18:58.988 "driver_specific": {} 00:18:58.988 } 00:18:58.988 ] 00:18:58.988 10:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.988 10:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@907 -- # return 0 00:18:58.988 10:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:58.988 10:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:58.988 10:03:35 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:58.988 10:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:58.988 10:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:58.988 10:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:58.988 10:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:58.988 10:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:58.988 10:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:58.988 10:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:58.988 10:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:58.988 10:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:58.988 10:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.988 10:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:58.988 10:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.248 10:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:59.248 "name": "Existed_Raid", 00:18:59.248 "uuid": "f7d7730e-5ac3-4b3e-bdee-9c55cb035fa2", 00:18:59.248 "strip_size_kb": 0, 00:18:59.248 "state": "configuring", 00:18:59.248 "raid_level": "raid1", 
00:18:59.248 "superblock": true, 00:18:59.248 "num_base_bdevs": 2, 00:18:59.248 "num_base_bdevs_discovered": 1, 00:18:59.248 "num_base_bdevs_operational": 2, 00:18:59.248 "base_bdevs_list": [ 00:18:59.248 { 00:18:59.248 "name": "BaseBdev1", 00:18:59.248 "uuid": "9c713175-f576-412d-aa24-36ccfbcc4da1", 00:18:59.248 "is_configured": true, 00:18:59.248 "data_offset": 256, 00:18:59.248 "data_size": 7936 00:18:59.248 }, 00:18:59.248 { 00:18:59.248 "name": "BaseBdev2", 00:18:59.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:59.248 "is_configured": false, 00:18:59.248 "data_offset": 0, 00:18:59.248 "data_size": 0 00:18:59.248 } 00:18:59.248 ] 00:18:59.248 }' 00:18:59.248 10:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:59.248 10:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:59.508 10:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:59.508 10:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.508 10:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:59.508 [2024-10-21 10:03:35.986153] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:59.508 [2024-10-21 10:03:35.986312] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000005f00 name Existed_Raid, state configuring 00:18:59.508 10:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.508 10:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:59.508 10:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 
-- # xtrace_disable 00:18:59.508 10:03:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:59.508 [2024-10-21 10:03:35.998174] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:59.508 [2024-10-21 10:03:36.000726] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:59.508 [2024-10-21 10:03:36.001031] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:59.508 10:03:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.509 10:03:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:59.509 10:03:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:59.509 10:03:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:59.509 10:03:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:59.509 10:03:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:59.509 10:03:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:59.509 10:03:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:59.509 10:03:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:59.509 10:03:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:59.509 10:03:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:59.509 
10:03:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:59.509 10:03:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:59.509 10:03:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:59.509 10:03:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.509 10:03:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:59.509 10:03:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:59.509 10:03:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.509 10:03:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:59.509 "name": "Existed_Raid", 00:18:59.509 "uuid": "2aad08d7-fe6c-4250-b57c-542f6b9d7035", 00:18:59.509 "strip_size_kb": 0, 00:18:59.509 "state": "configuring", 00:18:59.509 "raid_level": "raid1", 00:18:59.509 "superblock": true, 00:18:59.509 "num_base_bdevs": 2, 00:18:59.509 "num_base_bdevs_discovered": 1, 00:18:59.509 "num_base_bdevs_operational": 2, 00:18:59.509 "base_bdevs_list": [ 00:18:59.509 { 00:18:59.509 "name": "BaseBdev1", 00:18:59.509 "uuid": "9c713175-f576-412d-aa24-36ccfbcc4da1", 00:18:59.509 "is_configured": true, 00:18:59.509 "data_offset": 256, 00:18:59.509 "data_size": 7936 00:18:59.509 }, 00:18:59.509 { 00:18:59.509 "name": "BaseBdev2", 00:18:59.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:59.509 "is_configured": false, 00:18:59.509 "data_offset": 0, 00:18:59.509 "data_size": 0 00:18:59.509 } 00:18:59.509 ] 00:18:59.509 }' 00:18:59.509 10:03:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:18:59.509 10:03:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:00.079 10:03:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:19:00.079 10:03:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.079 10:03:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:00.079 [2024-10-21 10:03:36.545471] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:00.079 BaseBdev2 00:19:00.079 [2024-10-21 10:03:36.545913] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:19:00.079 [2024-10-21 10:03:36.545935] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:00.079 [2024-10-21 10:03:36.546056] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:19:00.080 [2024-10-21 10:03:36.546152] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:19:00.080 [2024-10-21 10:03:36.546165] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006280 00:19:00.080 [2024-10-21 10:03:36.546236] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:00.080 10:03:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.080 10:03:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:19:00.080 10:03:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:19:00.080 10:03:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # local bdev_timeout= 
00:19:00.080 10:03:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local i 00:19:00.080 10:03:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:00.080 10:03:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:00.080 10:03:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:19:00.080 10:03:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.080 10:03:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:00.080 10:03:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.080 10:03:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:00.080 10:03:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.080 10:03:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:00.080 [ 00:19:00.080 { 00:19:00.080 "name": "BaseBdev2", 00:19:00.080 "aliases": [ 00:19:00.080 "99c0331e-3be3-4f01-acaa-82b508a55b49" 00:19:00.080 ], 00:19:00.080 "product_name": "Malloc disk", 00:19:00.080 "block_size": 4128, 00:19:00.080 "num_blocks": 8192, 00:19:00.080 "uuid": "99c0331e-3be3-4f01-acaa-82b508a55b49", 00:19:00.080 "md_size": 32, 00:19:00.080 "md_interleave": true, 00:19:00.080 "dif_type": 0, 00:19:00.080 "assigned_rate_limits": { 00:19:00.080 "rw_ios_per_sec": 0, 00:19:00.080 "rw_mbytes_per_sec": 0, 00:19:00.080 "r_mbytes_per_sec": 0, 00:19:00.080 "w_mbytes_per_sec": 0 00:19:00.080 }, 00:19:00.080 "claimed": true, 00:19:00.080 "claim_type": "exclusive_write", 
00:19:00.080 "zoned": false, 00:19:00.080 "supported_io_types": { 00:19:00.080 "read": true, 00:19:00.080 "write": true, 00:19:00.080 "unmap": true, 00:19:00.080 "flush": true, 00:19:00.080 "reset": true, 00:19:00.080 "nvme_admin": false, 00:19:00.080 "nvme_io": false, 00:19:00.080 "nvme_io_md": false, 00:19:00.080 "write_zeroes": true, 00:19:00.080 "zcopy": true, 00:19:00.080 "get_zone_info": false, 00:19:00.080 "zone_management": false, 00:19:00.080 "zone_append": false, 00:19:00.080 "compare": false, 00:19:00.080 "compare_and_write": false, 00:19:00.080 "abort": true, 00:19:00.080 "seek_hole": false, 00:19:00.080 "seek_data": false, 00:19:00.080 "copy": true, 00:19:00.080 "nvme_iov_md": false 00:19:00.080 }, 00:19:00.080 "memory_domains": [ 00:19:00.080 { 00:19:00.080 "dma_device_id": "system", 00:19:00.080 "dma_device_type": 1 00:19:00.080 }, 00:19:00.080 { 00:19:00.080 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:00.080 "dma_device_type": 2 00:19:00.080 } 00:19:00.080 ], 00:19:00.080 "driver_specific": {} 00:19:00.080 } 00:19:00.080 ] 00:19:00.080 10:03:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.080 10:03:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@907 -- # return 0 00:19:00.080 10:03:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:00.080 10:03:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:00.080 10:03:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:19:00.080 10:03:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:00.080 10:03:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:00.080 
10:03:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:00.080 10:03:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:00.080 10:03:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:00.080 10:03:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:00.080 10:03:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:00.080 10:03:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:00.080 10:03:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:00.080 10:03:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:00.080 10:03:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:00.080 10:03:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.080 10:03:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:00.080 10:03:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.080 10:03:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:00.080 "name": "Existed_Raid", 00:19:00.080 "uuid": "2aad08d7-fe6c-4250-b57c-542f6b9d7035", 00:19:00.080 "strip_size_kb": 0, 00:19:00.080 "state": "online", 00:19:00.080 "raid_level": "raid1", 00:19:00.080 "superblock": true, 00:19:00.080 "num_base_bdevs": 2, 00:19:00.080 "num_base_bdevs_discovered": 2, 00:19:00.080 
"num_base_bdevs_operational": 2, 00:19:00.080 "base_bdevs_list": [ 00:19:00.080 { 00:19:00.080 "name": "BaseBdev1", 00:19:00.080 "uuid": "9c713175-f576-412d-aa24-36ccfbcc4da1", 00:19:00.080 "is_configured": true, 00:19:00.080 "data_offset": 256, 00:19:00.080 "data_size": 7936 00:19:00.080 }, 00:19:00.080 { 00:19:00.080 "name": "BaseBdev2", 00:19:00.080 "uuid": "99c0331e-3be3-4f01-acaa-82b508a55b49", 00:19:00.080 "is_configured": true, 00:19:00.080 "data_offset": 256, 00:19:00.080 "data_size": 7936 00:19:00.080 } 00:19:00.080 ] 00:19:00.080 }' 00:19:00.080 10:03:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:00.080 10:03:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:00.709 10:03:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:19:00.709 10:03:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:00.709 10:03:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:00.709 10:03:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:00.709 10:03:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:19:00.709 10:03:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:00.709 10:03:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:00.709 10:03:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:00.709 10:03:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.709 10:03:37 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:00.709 [2024-10-21 10:03:37.029154] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:00.709 10:03:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.709 10:03:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:00.709 "name": "Existed_Raid", 00:19:00.709 "aliases": [ 00:19:00.709 "2aad08d7-fe6c-4250-b57c-542f6b9d7035" 00:19:00.709 ], 00:19:00.709 "product_name": "Raid Volume", 00:19:00.709 "block_size": 4128, 00:19:00.709 "num_blocks": 7936, 00:19:00.709 "uuid": "2aad08d7-fe6c-4250-b57c-542f6b9d7035", 00:19:00.709 "md_size": 32, 00:19:00.709 "md_interleave": true, 00:19:00.709 "dif_type": 0, 00:19:00.709 "assigned_rate_limits": { 00:19:00.709 "rw_ios_per_sec": 0, 00:19:00.709 "rw_mbytes_per_sec": 0, 00:19:00.709 "r_mbytes_per_sec": 0, 00:19:00.709 "w_mbytes_per_sec": 0 00:19:00.709 }, 00:19:00.709 "claimed": false, 00:19:00.709 "zoned": false, 00:19:00.709 "supported_io_types": { 00:19:00.709 "read": true, 00:19:00.709 "write": true, 00:19:00.709 "unmap": false, 00:19:00.709 "flush": false, 00:19:00.709 "reset": true, 00:19:00.709 "nvme_admin": false, 00:19:00.709 "nvme_io": false, 00:19:00.709 "nvme_io_md": false, 00:19:00.709 "write_zeroes": true, 00:19:00.709 "zcopy": false, 00:19:00.709 "get_zone_info": false, 00:19:00.709 "zone_management": false, 00:19:00.709 "zone_append": false, 00:19:00.709 "compare": false, 00:19:00.709 "compare_and_write": false, 00:19:00.709 "abort": false, 00:19:00.709 "seek_hole": false, 00:19:00.709 "seek_data": false, 00:19:00.709 "copy": false, 00:19:00.709 "nvme_iov_md": false 00:19:00.709 }, 00:19:00.709 "memory_domains": [ 00:19:00.709 { 00:19:00.709 "dma_device_id": "system", 00:19:00.709 "dma_device_type": 1 00:19:00.709 }, 00:19:00.709 { 00:19:00.709 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:19:00.709 "dma_device_type": 2 00:19:00.709 }, 00:19:00.709 { 00:19:00.709 "dma_device_id": "system", 00:19:00.709 "dma_device_type": 1 00:19:00.709 }, 00:19:00.709 { 00:19:00.709 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:00.709 "dma_device_type": 2 00:19:00.709 } 00:19:00.709 ], 00:19:00.709 "driver_specific": { 00:19:00.709 "raid": { 00:19:00.709 "uuid": "2aad08d7-fe6c-4250-b57c-542f6b9d7035", 00:19:00.709 "strip_size_kb": 0, 00:19:00.709 "state": "online", 00:19:00.709 "raid_level": "raid1", 00:19:00.709 "superblock": true, 00:19:00.709 "num_base_bdevs": 2, 00:19:00.709 "num_base_bdevs_discovered": 2, 00:19:00.709 "num_base_bdevs_operational": 2, 00:19:00.709 "base_bdevs_list": [ 00:19:00.709 { 00:19:00.709 "name": "BaseBdev1", 00:19:00.709 "uuid": "9c713175-f576-412d-aa24-36ccfbcc4da1", 00:19:00.709 "is_configured": true, 00:19:00.709 "data_offset": 256, 00:19:00.709 "data_size": 7936 00:19:00.709 }, 00:19:00.709 { 00:19:00.709 "name": "BaseBdev2", 00:19:00.709 "uuid": "99c0331e-3be3-4f01-acaa-82b508a55b49", 00:19:00.709 "is_configured": true, 00:19:00.709 "data_offset": 256, 00:19:00.709 "data_size": 7936 00:19:00.709 } 00:19:00.709 ] 00:19:00.709 } 00:19:00.709 } 00:19:00.709 }' 00:19:00.709 10:03:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:00.709 10:03:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:19:00.709 BaseBdev2' 00:19:00.709 10:03:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:00.709 10:03:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:19:00.709 10:03:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:19:00.709 10:03:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:19:00.709 10:03:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:00.709 10:03:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.709 10:03:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:00.709 10:03:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.709 10:03:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:19:00.709 10:03:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:19:00.709 10:03:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:00.709 10:03:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:00.709 10:03:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:00.709 10:03:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.709 10:03:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:00.709 10:03:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.709 10:03:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:19:00.709 
10:03:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:19:00.709 10:03:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:00.709 10:03:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.709 10:03:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:00.709 [2024-10-21 10:03:37.244591] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:00.970 10:03:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.970 10:03:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:19:00.970 10:03:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:19:00.970 10:03:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:00.970 10:03:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:19:00.970 10:03:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:19:00.970 10:03:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:19:00.970 10:03:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:00.970 10:03:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:00.970 10:03:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:00.970 10:03:37 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:00.970 10:03:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:00.970 10:03:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:00.970 10:03:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:00.970 10:03:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:00.970 10:03:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:00.970 10:03:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:00.970 10:03:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.970 10:03:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:00.970 10:03:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:00.970 10:03:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.970 10:03:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:00.970 "name": "Existed_Raid", 00:19:00.970 "uuid": "2aad08d7-fe6c-4250-b57c-542f6b9d7035", 00:19:00.970 "strip_size_kb": 0, 00:19:00.970 "state": "online", 00:19:00.970 "raid_level": "raid1", 00:19:00.970 "superblock": true, 00:19:00.970 "num_base_bdevs": 2, 00:19:00.970 "num_base_bdevs_discovered": 1, 00:19:00.970 "num_base_bdevs_operational": 1, 00:19:00.970 "base_bdevs_list": [ 00:19:00.970 { 00:19:00.970 "name": null, 00:19:00.970 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:19:00.970 "is_configured": false, 00:19:00.970 "data_offset": 0, 00:19:00.970 "data_size": 7936 00:19:00.970 }, 00:19:00.970 { 00:19:00.970 "name": "BaseBdev2", 00:19:00.970 "uuid": "99c0331e-3be3-4f01-acaa-82b508a55b49", 00:19:00.970 "is_configured": true, 00:19:00.970 "data_offset": 256, 00:19:00.970 "data_size": 7936 00:19:00.970 } 00:19:00.970 ] 00:19:00.970 }' 00:19:00.970 10:03:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:00.970 10:03:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:01.539 10:03:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:19:01.539 10:03:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:01.539 10:03:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:01.539 10:03:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:01.539 10:03:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.539 10:03:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:01.539 10:03:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.539 10:03:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:01.539 10:03:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:01.539 10:03:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:19:01.539 10:03:37 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.539 10:03:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:01.539 [2024-10-21 10:03:37.869182] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:01.539 [2024-10-21 10:03:37.869451] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:01.539 [2024-10-21 10:03:37.998317] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:01.539 [2024-10-21 10:03:37.998388] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:01.539 [2024-10-21 10:03:37.998404] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state offline 00:19:01.539 10:03:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.539 10:03:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:01.539 10:03:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:01.539 10:03:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:01.539 10:03:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.539 10:03:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:01.539 10:03:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:19:01.539 10:03:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.539 10:03:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:19:01.539 10:03:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:19:01.539 10:03:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:19:01.539 10:03:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 88171 00:19:01.539 10:03:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 88171 ']' 00:19:01.539 10:03:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 88171 00:19:01.539 10:03:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:19:01.539 10:03:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:01.540 10:03:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88171 00:19:01.540 10:03:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:01.540 10:03:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:01.540 10:03:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88171' 00:19:01.540 killing process with pid 88171 00:19:01.540 10:03:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@969 -- # kill 88171 00:19:01.540 [2024-10-21 10:03:38.091963] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:01.540 10:03:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@974 -- # wait 88171 00:19:01.540 [2024-10-21 10:03:38.114387] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:03.447 
10:03:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:19:03.447 00:19:03.447 real 0m5.667s 00:19:03.447 user 0m7.909s 00:19:03.447 sys 0m0.966s 00:19:03.447 10:03:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:03.447 10:03:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:03.447 ************************************ 00:19:03.447 END TEST raid_state_function_test_sb_md_interleaved 00:19:03.447 ************************************ 00:19:03.447 10:03:39 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:19:03.447 10:03:39 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:19:03.447 10:03:39 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:03.447 10:03:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:03.447 ************************************ 00:19:03.447 START TEST raid_superblock_test_md_interleaved 00:19:03.447 ************************************ 00:19:03.448 10:03:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:19:03.448 10:03:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:19:03.448 10:03:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:19:03.448 10:03:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:19:03.448 10:03:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:19:03.448 10:03:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:19:03.448 10:03:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:19:03.448 10:03:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:19:03.448 10:03:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:19:03.448 10:03:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:19:03.448 10:03:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:19:03.448 10:03:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:19:03.448 10:03:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:19:03.448 10:03:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:19:03.448 10:03:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:19:03.448 10:03:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:19:03.448 10:03:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=88429 00:19:03.448 10:03:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:19:03.448 10:03:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 88429 00:19:03.448 10:03:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 88429 ']' 00:19:03.448 10:03:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:03.448 10:03:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:03.448 10:03:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@838 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:03.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:03.448 10:03:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:03.448 10:03:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:03.448 [2024-10-21 10:03:39.775025] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:19:03.448 [2024-10-21 10:03:39.775175] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88429 ] 00:19:03.448 [2024-10-21 10:03:39.943287] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:03.708 [2024-10-21 10:03:40.113078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:03.968 [2024-10-21 10:03:40.422486] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:03.968 [2024-10-21 10:03:40.422589] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:04.228 10:03:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:04.228 10:03:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:19:04.228 10:03:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:19:04.228 10:03:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:04.228 10:03:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:19:04.228 10:03:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt1 00:19:04.229 10:03:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:04.229 10:03:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:04.229 10:03:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:04.229 10:03:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:04.229 10:03:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:19:04.229 10:03:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.229 10:03:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:04.229 malloc1 00:19:04.229 10:03:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.229 10:03:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:04.229 10:03:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.229 10:03:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:04.229 [2024-10-21 10:03:40.771463] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:04.229 [2024-10-21 10:03:40.771663] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:04.229 [2024-10-21 10:03:40.771718] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:19:04.229 [2024-10-21 10:03:40.771759] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:04.229 
[2024-10-21 10:03:40.774320] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:04.229 [2024-10-21 10:03:40.774419] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:04.229 pt1 00:19:04.229 10:03:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.229 10:03:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:04.229 10:03:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:04.229 10:03:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:19:04.229 10:03:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:19:04.229 10:03:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:04.229 10:03:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:04.229 10:03:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:04.229 10:03:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:04.229 10:03:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:19:04.229 10:03:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.229 10:03:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:04.488 malloc2 00:19:04.488 10:03:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.488 10:03:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 
-- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:04.488 10:03:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.488 10:03:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:04.488 [2024-10-21 10:03:40.848999] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:04.488 [2024-10-21 10:03:40.849176] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:04.488 [2024-10-21 10:03:40.849226] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:19:04.488 [2024-10-21 10:03:40.849265] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:04.488 [2024-10-21 10:03:40.851788] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:04.488 [2024-10-21 10:03:40.851874] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:04.488 pt2 00:19:04.488 10:03:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.488 10:03:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:04.488 10:03:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:04.488 10:03:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:19:04.488 10:03:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.488 10:03:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:04.488 [2024-10-21 10:03:40.861051] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:04.488 [2024-10-21 10:03:40.863561] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:04.488 [2024-10-21 10:03:40.863866] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000005b80 00:19:04.488 [2024-10-21 10:03:40.863923] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:04.488 [2024-10-21 10:03:40.864040] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:19:04.488 [2024-10-21 10:03:40.864169] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000005b80 00:19:04.488 [2024-10-21 10:03:40.864219] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000005b80 00:19:04.489 [2024-10-21 10:03:40.864346] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:04.489 10:03:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.489 10:03:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:04.489 10:03:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:04.489 10:03:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:04.489 10:03:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:04.489 10:03:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:04.489 10:03:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:04.489 10:03:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:04.489 10:03:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:04.489 
10:03:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:04.489 10:03:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:04.489 10:03:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:04.489 10:03:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.489 10:03:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:04.489 10:03:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:04.489 10:03:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.489 10:03:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:04.489 "name": "raid_bdev1", 00:19:04.489 "uuid": "44c99a2f-d034-46ec-83a1-58bb3b8608da", 00:19:04.489 "strip_size_kb": 0, 00:19:04.489 "state": "online", 00:19:04.489 "raid_level": "raid1", 00:19:04.489 "superblock": true, 00:19:04.489 "num_base_bdevs": 2, 00:19:04.489 "num_base_bdevs_discovered": 2, 00:19:04.489 "num_base_bdevs_operational": 2, 00:19:04.489 "base_bdevs_list": [ 00:19:04.489 { 00:19:04.489 "name": "pt1", 00:19:04.489 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:04.489 "is_configured": true, 00:19:04.489 "data_offset": 256, 00:19:04.489 "data_size": 7936 00:19:04.489 }, 00:19:04.489 { 00:19:04.489 "name": "pt2", 00:19:04.489 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:04.489 "is_configured": true, 00:19:04.489 "data_offset": 256, 00:19:04.489 "data_size": 7936 00:19:04.489 } 00:19:04.489 ] 00:19:04.489 }' 00:19:04.489 10:03:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:04.489 10:03:40 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:05.059 10:03:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:19:05.059 10:03:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:05.059 10:03:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:05.059 10:03:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:05.059 10:03:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:19:05.059 10:03:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:05.059 10:03:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:05.059 10:03:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:05.059 10:03:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.059 10:03:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:05.059 [2024-10-21 10:03:41.360634] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:05.059 10:03:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.059 10:03:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:05.059 "name": "raid_bdev1", 00:19:05.059 "aliases": [ 00:19:05.059 "44c99a2f-d034-46ec-83a1-58bb3b8608da" 00:19:05.059 ], 00:19:05.059 "product_name": "Raid Volume", 00:19:05.059 "block_size": 4128, 00:19:05.059 "num_blocks": 7936, 00:19:05.059 "uuid": "44c99a2f-d034-46ec-83a1-58bb3b8608da", 00:19:05.059 "md_size": 32, 
00:19:05.059 "md_interleave": true, 00:19:05.059 "dif_type": 0, 00:19:05.059 "assigned_rate_limits": { 00:19:05.059 "rw_ios_per_sec": 0, 00:19:05.059 "rw_mbytes_per_sec": 0, 00:19:05.059 "r_mbytes_per_sec": 0, 00:19:05.059 "w_mbytes_per_sec": 0 00:19:05.059 }, 00:19:05.059 "claimed": false, 00:19:05.059 "zoned": false, 00:19:05.059 "supported_io_types": { 00:19:05.059 "read": true, 00:19:05.059 "write": true, 00:19:05.059 "unmap": false, 00:19:05.059 "flush": false, 00:19:05.059 "reset": true, 00:19:05.059 "nvme_admin": false, 00:19:05.059 "nvme_io": false, 00:19:05.059 "nvme_io_md": false, 00:19:05.059 "write_zeroes": true, 00:19:05.059 "zcopy": false, 00:19:05.059 "get_zone_info": false, 00:19:05.059 "zone_management": false, 00:19:05.059 "zone_append": false, 00:19:05.059 "compare": false, 00:19:05.059 "compare_and_write": false, 00:19:05.059 "abort": false, 00:19:05.059 "seek_hole": false, 00:19:05.059 "seek_data": false, 00:19:05.059 "copy": false, 00:19:05.059 "nvme_iov_md": false 00:19:05.059 }, 00:19:05.059 "memory_domains": [ 00:19:05.059 { 00:19:05.059 "dma_device_id": "system", 00:19:05.059 "dma_device_type": 1 00:19:05.059 }, 00:19:05.059 { 00:19:05.059 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:05.059 "dma_device_type": 2 00:19:05.059 }, 00:19:05.059 { 00:19:05.059 "dma_device_id": "system", 00:19:05.059 "dma_device_type": 1 00:19:05.059 }, 00:19:05.059 { 00:19:05.059 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:05.059 "dma_device_type": 2 00:19:05.059 } 00:19:05.059 ], 00:19:05.059 "driver_specific": { 00:19:05.059 "raid": { 00:19:05.059 "uuid": "44c99a2f-d034-46ec-83a1-58bb3b8608da", 00:19:05.059 "strip_size_kb": 0, 00:19:05.059 "state": "online", 00:19:05.059 "raid_level": "raid1", 00:19:05.059 "superblock": true, 00:19:05.059 "num_base_bdevs": 2, 00:19:05.059 "num_base_bdevs_discovered": 2, 00:19:05.059 "num_base_bdevs_operational": 2, 00:19:05.059 "base_bdevs_list": [ 00:19:05.059 { 00:19:05.059 "name": "pt1", 00:19:05.059 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:19:05.059 "is_configured": true, 00:19:05.059 "data_offset": 256, 00:19:05.059 "data_size": 7936 00:19:05.059 }, 00:19:05.059 { 00:19:05.059 "name": "pt2", 00:19:05.059 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:05.059 "is_configured": true, 00:19:05.059 "data_offset": 256, 00:19:05.059 "data_size": 7936 00:19:05.059 } 00:19:05.059 ] 00:19:05.059 } 00:19:05.059 } 00:19:05.059 }' 00:19:05.059 10:03:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:05.059 10:03:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:05.059 pt2' 00:19:05.059 10:03:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:05.059 10:03:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:19:05.059 10:03:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:05.059 10:03:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:05.059 10:03:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:05.059 10:03:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.059 10:03:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:05.059 10:03:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.059 10:03:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:19:05.059 10:03:41 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:19:05.059 10:03:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:05.059 10:03:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:05.059 10:03:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.059 10:03:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:05.059 10:03:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:05.059 10:03:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.059 10:03:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:19:05.059 10:03:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:19:05.059 10:03:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:19:05.059 10:03:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:05.060 10:03:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.060 10:03:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:05.060 [2024-10-21 10:03:41.612097] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:05.060 10:03:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.060 10:03:41 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=44c99a2f-d034-46ec-83a1-58bb3b8608da 00:19:05.060 10:03:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z 44c99a2f-d034-46ec-83a1-58bb3b8608da ']' 00:19:05.060 10:03:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:05.060 10:03:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.060 10:03:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:05.322 [2024-10-21 10:03:41.659663] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:05.322 [2024-10-21 10:03:41.659797] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:05.322 [2024-10-21 10:03:41.659942] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:05.322 [2024-10-21 10:03:41.660055] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:05.322 [2024-10-21 10:03:41.660102] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000005b80 name raid_bdev1, state offline 00:19:05.322 10:03:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.322 10:03:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:19:05.322 10:03:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:05.322 10:03:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.322 10:03:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:05.322 10:03:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.322 10:03:41 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:19:05.322 10:03:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:19:05.322 10:03:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:05.322 10:03:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:19:05.322 10:03:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.322 10:03:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:05.322 10:03:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.322 10:03:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:05.322 10:03:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:19:05.322 10:03:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.322 10:03:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:05.322 10:03:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.322 10:03:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:19:05.322 10:03:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.322 10:03:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:05.322 10:03:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:05.322 10:03:41 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.322 10:03:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:19:05.322 10:03:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:05.322 10:03:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:19:05.322 10:03:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:05.322 10:03:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:05.322 10:03:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:05.322 10:03:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:05.322 10:03:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:05.322 10:03:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:05.322 10:03:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.322 10:03:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:05.322 [2024-10-21 10:03:41.799518] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:05.322 [2024-10-21 10:03:41.802204] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:05.322 [2024-10-21 10:03:41.802366] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc1 00:19:05.322 [2024-10-21 10:03:41.802489] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:19:05.322 [2024-10-21 10:03:41.802560] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:05.322 [2024-10-21 10:03:41.802659] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000005f00 name raid_bdev1, state configuring 00:19:05.322 request: 00:19:05.322 { 00:19:05.322 "name": "raid_bdev1", 00:19:05.322 "raid_level": "raid1", 00:19:05.322 "base_bdevs": [ 00:19:05.322 "malloc1", 00:19:05.322 "malloc2" 00:19:05.322 ], 00:19:05.322 "superblock": false, 00:19:05.322 "method": "bdev_raid_create", 00:19:05.322 "req_id": 1 00:19:05.322 } 00:19:05.322 Got JSON-RPC error response 00:19:05.322 response: 00:19:05.322 { 00:19:05.322 "code": -17, 00:19:05.322 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:05.322 } 00:19:05.322 10:03:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:05.322 10:03:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:19:05.322 10:03:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:05.322 10:03:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:05.322 10:03:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:05.322 10:03:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:19:05.322 10:03:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:05.322 10:03:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.322 10:03:41 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:05.322 10:03:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.322 10:03:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:19:05.322 10:03:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:19:05.322 10:03:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:05.322 10:03:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.322 10:03:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:05.322 [2024-10-21 10:03:41.855461] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:05.322 [2024-10-21 10:03:41.855653] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:05.322 [2024-10-21 10:03:41.855700] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:19:05.322 [2024-10-21 10:03:41.855744] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:05.322 [2024-10-21 10:03:41.858436] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:05.322 [2024-10-21 10:03:41.858535] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:05.322 [2024-10-21 10:03:41.858663] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:05.322 [2024-10-21 10:03:41.858774] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:05.322 pt1 00:19:05.322 10:03:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.322 10:03:41 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:19:05.322 10:03:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:05.322 10:03:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:05.322 10:03:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:05.322 10:03:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:05.322 10:03:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:05.322 10:03:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:05.322 10:03:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:05.322 10:03:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:05.323 10:03:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:05.323 10:03:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:05.323 10:03:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:05.323 10:03:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.323 10:03:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:05.323 10:03:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.323 10:03:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:05.323 
"name": "raid_bdev1", 00:19:05.323 "uuid": "44c99a2f-d034-46ec-83a1-58bb3b8608da", 00:19:05.323 "strip_size_kb": 0, 00:19:05.323 "state": "configuring", 00:19:05.323 "raid_level": "raid1", 00:19:05.323 "superblock": true, 00:19:05.323 "num_base_bdevs": 2, 00:19:05.323 "num_base_bdevs_discovered": 1, 00:19:05.323 "num_base_bdevs_operational": 2, 00:19:05.323 "base_bdevs_list": [ 00:19:05.323 { 00:19:05.323 "name": "pt1", 00:19:05.323 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:05.323 "is_configured": true, 00:19:05.323 "data_offset": 256, 00:19:05.323 "data_size": 7936 00:19:05.323 }, 00:19:05.323 { 00:19:05.323 "name": null, 00:19:05.323 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:05.323 "is_configured": false, 00:19:05.323 "data_offset": 256, 00:19:05.323 "data_size": 7936 00:19:05.323 } 00:19:05.323 ] 00:19:05.323 }' 00:19:05.323 10:03:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:05.323 10:03:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:05.893 10:03:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:19:05.893 10:03:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:19:05.893 10:03:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:05.893 10:03:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:05.893 10:03:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.893 10:03:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:05.893 [2024-10-21 10:03:42.306834] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:05.893 [2024-10-21 10:03:42.307095] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:05.893 [2024-10-21 10:03:42.307181] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:19:05.893 [2024-10-21 10:03:42.307227] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:05.893 [2024-10-21 10:03:42.307491] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:05.893 [2024-10-21 10:03:42.307548] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:05.893 [2024-10-21 10:03:42.307664] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:05.893 [2024-10-21 10:03:42.307728] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:05.893 [2024-10-21 10:03:42.307885] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:19:05.893 [2024-10-21 10:03:42.307932] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:05.893 [2024-10-21 10:03:42.308050] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:19:05.893 [2024-10-21 10:03:42.308173] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:19:05.893 [2024-10-21 10:03:42.308217] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:19:05.893 [2024-10-21 10:03:42.308369] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:05.893 pt2 00:19:05.893 10:03:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.893 10:03:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:05.893 10:03:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:05.893 10:03:42 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:05.893 10:03:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:05.893 10:03:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:05.893 10:03:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:05.893 10:03:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:05.893 10:03:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:05.893 10:03:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:05.893 10:03:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:05.893 10:03:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:05.893 10:03:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:05.893 10:03:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:05.893 10:03:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:05.893 10:03:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.893 10:03:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:05.893 10:03:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.893 10:03:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:05.893 "name": 
"raid_bdev1", 00:19:05.893 "uuid": "44c99a2f-d034-46ec-83a1-58bb3b8608da", 00:19:05.893 "strip_size_kb": 0, 00:19:05.893 "state": "online", 00:19:05.893 "raid_level": "raid1", 00:19:05.893 "superblock": true, 00:19:05.893 "num_base_bdevs": 2, 00:19:05.893 "num_base_bdevs_discovered": 2, 00:19:05.893 "num_base_bdevs_operational": 2, 00:19:05.893 "base_bdevs_list": [ 00:19:05.893 { 00:19:05.893 "name": "pt1", 00:19:05.893 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:05.893 "is_configured": true, 00:19:05.893 "data_offset": 256, 00:19:05.893 "data_size": 7936 00:19:05.893 }, 00:19:05.893 { 00:19:05.893 "name": "pt2", 00:19:05.893 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:05.893 "is_configured": true, 00:19:05.893 "data_offset": 256, 00:19:05.893 "data_size": 7936 00:19:05.893 } 00:19:05.893 ] 00:19:05.893 }' 00:19:05.893 10:03:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:05.893 10:03:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:06.464 10:03:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:19:06.464 10:03:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:06.464 10:03:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:06.464 10:03:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:06.464 10:03:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:19:06.464 10:03:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:06.464 10:03:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:06.464 10:03:42 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.464 10:03:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:06.464 10:03:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:06.464 [2024-10-21 10:03:42.798373] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:06.464 10:03:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.464 10:03:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:06.464 "name": "raid_bdev1", 00:19:06.464 "aliases": [ 00:19:06.464 "44c99a2f-d034-46ec-83a1-58bb3b8608da" 00:19:06.464 ], 00:19:06.464 "product_name": "Raid Volume", 00:19:06.464 "block_size": 4128, 00:19:06.464 "num_blocks": 7936, 00:19:06.464 "uuid": "44c99a2f-d034-46ec-83a1-58bb3b8608da", 00:19:06.464 "md_size": 32, 00:19:06.464 "md_interleave": true, 00:19:06.464 "dif_type": 0, 00:19:06.464 "assigned_rate_limits": { 00:19:06.464 "rw_ios_per_sec": 0, 00:19:06.464 "rw_mbytes_per_sec": 0, 00:19:06.464 "r_mbytes_per_sec": 0, 00:19:06.464 "w_mbytes_per_sec": 0 00:19:06.464 }, 00:19:06.464 "claimed": false, 00:19:06.464 "zoned": false, 00:19:06.464 "supported_io_types": { 00:19:06.464 "read": true, 00:19:06.464 "write": true, 00:19:06.464 "unmap": false, 00:19:06.464 "flush": false, 00:19:06.464 "reset": true, 00:19:06.464 "nvme_admin": false, 00:19:06.464 "nvme_io": false, 00:19:06.464 "nvme_io_md": false, 00:19:06.464 "write_zeroes": true, 00:19:06.464 "zcopy": false, 00:19:06.464 "get_zone_info": false, 00:19:06.464 "zone_management": false, 00:19:06.464 "zone_append": false, 00:19:06.464 "compare": false, 00:19:06.464 "compare_and_write": false, 00:19:06.464 "abort": false, 00:19:06.464 "seek_hole": false, 00:19:06.464 "seek_data": false, 00:19:06.464 "copy": false, 00:19:06.464 "nvme_iov_md": 
false 00:19:06.464 }, 00:19:06.464 "memory_domains": [ 00:19:06.464 { 00:19:06.464 "dma_device_id": "system", 00:19:06.464 "dma_device_type": 1 00:19:06.464 }, 00:19:06.464 { 00:19:06.464 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:06.464 "dma_device_type": 2 00:19:06.464 }, 00:19:06.464 { 00:19:06.464 "dma_device_id": "system", 00:19:06.464 "dma_device_type": 1 00:19:06.464 }, 00:19:06.464 { 00:19:06.464 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:06.464 "dma_device_type": 2 00:19:06.464 } 00:19:06.464 ], 00:19:06.464 "driver_specific": { 00:19:06.464 "raid": { 00:19:06.464 "uuid": "44c99a2f-d034-46ec-83a1-58bb3b8608da", 00:19:06.464 "strip_size_kb": 0, 00:19:06.464 "state": "online", 00:19:06.464 "raid_level": "raid1", 00:19:06.464 "superblock": true, 00:19:06.464 "num_base_bdevs": 2, 00:19:06.464 "num_base_bdevs_discovered": 2, 00:19:06.464 "num_base_bdevs_operational": 2, 00:19:06.464 "base_bdevs_list": [ 00:19:06.464 { 00:19:06.464 "name": "pt1", 00:19:06.464 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:06.464 "is_configured": true, 00:19:06.464 "data_offset": 256, 00:19:06.464 "data_size": 7936 00:19:06.464 }, 00:19:06.464 { 00:19:06.464 "name": "pt2", 00:19:06.464 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:06.464 "is_configured": true, 00:19:06.464 "data_offset": 256, 00:19:06.464 "data_size": 7936 00:19:06.464 } 00:19:06.464 ] 00:19:06.464 } 00:19:06.464 } 00:19:06.464 }' 00:19:06.464 10:03:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:06.464 10:03:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:06.464 pt2' 00:19:06.464 10:03:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:06.464 10:03:42 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0'
00:19:06.464 10:03:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:19:06.464 10:03:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:19:06.464 10:03:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:19:06.464 10:03:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:06.464 10:03:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:19:06.464 10:03:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:06.464 10:03:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0'
00:19:06.464 10:03:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]]
00:19:06.464 10:03:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:19:06.464 10:03:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:19:06.464 10:03:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:06.464 10:03:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:19:06.464 10:03:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:19:06.464 10:03:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:06.464 10:03:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0'
00:19:06.465 10:03:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]]
00:19:06.465 10:03:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:19:06.465 10:03:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid'
00:19:06.465 10:03:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:06.465 10:03:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:19:06.465 [2024-10-21 10:03:43.054000] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:19:06.725 10:03:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:06.725 10:03:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' 44c99a2f-d034-46ec-83a1-58bb3b8608da '!=' 44c99a2f-d034-46ec-83a1-58bb3b8608da ']'
00:19:06.725 10:03:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1
00:19:06.725 10:03:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in
00:19:06.725 10:03:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0
00:19:06.725 10:03:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1
00:19:06.725 10:03:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:06.725 10:03:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:19:06.725 [2024-10-21 10:03:43.097667] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1
00:19:06.725 10:03:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:06.725 10:03:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:19:06.725 10:03:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:19:06.725 10:03:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:19:06.725 10:03:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:19:06.725 10:03:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:19:06.725 10:03:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:19:06.725 10:03:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:19:06.725 10:03:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:19:06.725 10:03:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:19:06.725 10:03:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:19:06.725 10:03:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:19:06.725 10:03:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:06.725 10:03:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:06.725 10:03:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:19:06.725 10:03:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:06.725 10:03:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:19:06.725 "name": "raid_bdev1",
00:19:06.725 "uuid": "44c99a2f-d034-46ec-83a1-58bb3b8608da",
00:19:06.725 "strip_size_kb": 0,
00:19:06.725 "state": "online",
00:19:06.725 "raid_level": "raid1",
00:19:06.725 "superblock": true,
00:19:06.725 "num_base_bdevs": 2,
00:19:06.725 "num_base_bdevs_discovered": 1,
00:19:06.725 "num_base_bdevs_operational": 1,
00:19:06.725 "base_bdevs_list": [
00:19:06.725 {
00:19:06.725 "name": null,
00:19:06.725 "uuid": "00000000-0000-0000-0000-000000000000",
00:19:06.725 "is_configured": false,
00:19:06.725 "data_offset": 0,
00:19:06.725 "data_size": 7936
00:19:06.725 },
00:19:06.725 {
00:19:06.725 "name": "pt2",
00:19:06.725 "uuid": "00000000-0000-0000-0000-000000000002",
00:19:06.725 "is_configured": true,
00:19:06.725 "data_offset": 256,
00:19:06.725 "data_size": 7936
00:19:06.725 }
00:19:06.725 ]
00:19:06.725 }'
00:19:06.725 10:03:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:19:06.725 10:03:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:19:06.986 10:03:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:19:06.986 10:03:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:06.986 10:03:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:19:06.986 [2024-10-21 10:03:43.544828] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:19:06.986 [2024-10-21 10:03:43.544987] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:19:06.986 [2024-10-21 10:03:43.545128] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:19:06.986 [2024-10-21 10:03:43.545217] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:19:06.986 [2024-10-21 10:03:43.545266] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline
00:19:06.986 10:03:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:06.986 10:03:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all
00:19:06.986 10:03:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:06.986 10:03:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:19:06.986 10:03:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]'
00:19:06.986 10:03:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:07.307 10:03:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev=
00:19:07.307 10:03:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']'
00:19:07.307 10:03:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 ))
00:19:07.307 10:03:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:19:07.307 10:03:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2
00:19:07.307 10:03:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:07.307 10:03:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:19:07.307 10:03:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:07.307 10:03:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ ))
00:19:07.307 10:03:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:19:07.307 10:03:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 ))
00:19:07.307 10:03:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 ))
00:19:07.307 10:03:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1
00:19:07.307 10:03:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:19:07.307 10:03:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:07.307 10:03:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:19:07.307 [2024-10-21 10:03:43.620649] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:19:07.307 [2024-10-21 10:03:43.620862] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:19:07.307 [2024-10-21 10:03:43.620889] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080
00:19:07.307 [2024-10-21 10:03:43.620901] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:19:07.307 [2024-10-21 10:03:43.623250] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:19:07.307 [2024-10-21 10:03:43.623293] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:19:07.307 [2024-10-21 10:03:43.623362] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:19:07.307 [2024-10-21 10:03:43.623427] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:19:07.307 [2024-10-21 10:03:43.623511] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600
00:19:07.307 [2024-10-21 10:03:43.623523] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128
00:19:07.307 [2024-10-21 10:03:43.623641] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70
00:19:07.307 [2024-10-21 10:03:43.623717] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600
00:19:07.307 [2024-10-21 10:03:43.623725] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600
00:19:07.307 [2024-10-21 10:03:43.623799] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:19:07.307 pt2
00:19:07.307 10:03:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:07.307 10:03:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:19:07.307 10:03:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:19:07.307 10:03:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:19:07.307 10:03:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:19:07.307 10:03:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:19:07.307 10:03:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:19:07.307 10:03:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:19:07.307 10:03:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:19:07.307 10:03:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:19:07.307 10:03:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:19:07.307 10:03:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:19:07.307 10:03:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:07.307 10:03:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:19:07.307 10:03:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:07.307 10:03:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:07.307 10:03:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:19:07.307 "name": "raid_bdev1",
00:19:07.308 "uuid": "44c99a2f-d034-46ec-83a1-58bb3b8608da",
00:19:07.308 "strip_size_kb": 0,
00:19:07.308 "state": "online",
00:19:07.308 "raid_level": "raid1",
00:19:07.308 "superblock": true,
00:19:07.308 "num_base_bdevs": 2,
00:19:07.308 "num_base_bdevs_discovered": 1,
00:19:07.308 "num_base_bdevs_operational": 1,
00:19:07.308 "base_bdevs_list": [
00:19:07.308 {
00:19:07.308 "name": null,
00:19:07.308 "uuid": "00000000-0000-0000-0000-000000000000",
00:19:07.308 "is_configured": false,
00:19:07.308 "data_offset": 256,
00:19:07.308 "data_size": 7936
00:19:07.308 },
00:19:07.308 {
00:19:07.308 "name": "pt2",
00:19:07.308 "uuid": "00000000-0000-0000-0000-000000000002",
00:19:07.308 "is_configured": true,
00:19:07.308 "data_offset": 256,
00:19:07.308 "data_size": 7936
00:19:07.308 }
00:19:07.308 ]
00:19:07.308 }'
00:19:07.308 10:03:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:19:07.308 10:03:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:19:07.581 10:03:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:19:07.581 10:03:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:07.581 10:03:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:19:07.581 [2024-10-21 10:03:44.079831] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:19:07.581 [2024-10-21 10:03:44.079977] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:19:07.581 [2024-10-21 10:03:44.080093] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:19:07.581 [2024-10-21 10:03:44.080173] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:19:07.581 [2024-10-21 10:03:44.080249] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline
00:19:07.582 10:03:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:07.582 10:03:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all
00:19:07.582 10:03:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]'
00:19:07.582 10:03:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:07.582 10:03:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:19:07.582 10:03:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:07.582 10:03:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev=
00:19:07.582 10:03:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']'
00:19:07.582 10:03:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']'
00:19:07.582 10:03:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:19:07.582 10:03:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:07.582 10:03:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:19:07.582 [2024-10-21 10:03:44.139772] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:19:07.582 [2024-10-21 10:03:44.139966] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:19:07.582 [2024-10-21 10:03:44.140008] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
00:19:07.582 [2024-10-21 10:03:44.140036] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:19:07.582 [2024-10-21 10:03:44.142407] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:19:07.582 [2024-10-21 10:03:44.142517] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:19:07.582 [2024-10-21 10:03:44.142643] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:19:07.582 [2024-10-21 10:03:44.142745] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:19:07.582 [2024-10-21 10:03:44.142888] bdev_raid.c:3679:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2)
00:19:07.582 [2024-10-21 10:03:44.142944] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:19:07.582 [2024-10-21 10:03:44.142995] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state configuring
00:19:07.582 [2024-10-21 10:03:44.143111] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:19:07.582 [2024-10-21 10:03:44.143241] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00
00:19:07.582 [2024-10-21 10:03:44.143279] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128
00:19:07.582 [2024-10-21 10:03:44.143396] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:19:07.582 [2024-10-21 10:03:44.143515] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00
00:19:07.582 [2024-10-21 10:03:44.143557] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00
00:19:07.582 [2024-10-21 10:03:44.143741] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:19:07.582 pt1
00:19:07.582 10:03:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:07.582 10:03:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']'
00:19:07.582 10:03:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:19:07.582 10:03:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:19:07.582 10:03:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:19:07.582 10:03:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:19:07.582 10:03:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:19:07.582 10:03:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:19:07.582 10:03:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:19:07.582 10:03:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:19:07.582 10:03:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:19:07.582 10:03:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:19:07.582 10:03:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:19:07.582 10:03:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:07.582 10:03:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:19:07.582 10:03:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:07.582 10:03:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:07.842 10:03:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:19:07.842 "name": "raid_bdev1",
00:19:07.842 "uuid": "44c99a2f-d034-46ec-83a1-58bb3b8608da",
00:19:07.842 "strip_size_kb": 0,
00:19:07.842 "state": "online",
00:19:07.842 "raid_level": "raid1",
00:19:07.842 "superblock": true,
00:19:07.842 "num_base_bdevs": 2,
00:19:07.842 "num_base_bdevs_discovered": 1,
00:19:07.842 "num_base_bdevs_operational": 1,
00:19:07.842 "base_bdevs_list": [
00:19:07.842 {
00:19:07.842 "name": null,
00:19:07.842 "uuid": "00000000-0000-0000-0000-000000000000",
00:19:07.842 "is_configured": false,
00:19:07.842 "data_offset": 256,
00:19:07.842 "data_size": 7936
00:19:07.842 },
00:19:07.842 {
00:19:07.842 "name": "pt2",
00:19:07.842 "uuid": "00000000-0000-0000-0000-000000000002",
00:19:07.842 "is_configured": true,
00:19:07.842 "data_offset": 256,
00:19:07.842 "data_size": 7936
00:19:07.842 }
00:19:07.842 ]
00:19:07.842 }'
00:19:07.842 10:03:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:19:07.842 10:03:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:19:08.101 10:03:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured'
00:19:08.101 10:03:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online
00:19:08.101 10:03:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:08.101 10:03:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:19:08.101 10:03:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:08.101 10:03:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]]
00:19:08.101 10:03:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid'
00:19:08.101 10:03:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:19:08.101 10:03:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:08.101 10:03:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:19:08.101 [2024-10-21 10:03:44.679154] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:19:08.361 10:03:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:08.361 10:03:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' 44c99a2f-d034-46ec-83a1-58bb3b8608da '!=' 44c99a2f-d034-46ec-83a1-58bb3b8608da ']'
00:19:08.361 10:03:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 88429
00:19:08.361 10:03:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 88429 ']'
00:19:08.361 10:03:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 88429
00:19:08.361 10:03:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@955 -- # uname
00:19:08.361 10:03:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:19:08.361 10:03:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88429
00:19:08.361 10:03:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:19:08.361 10:03:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:19:08.361 10:03:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88429'
killing process with pid 88429
10:03:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@969 -- # kill 88429
00:19:08.361 10:03:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@974 -- # wait 88429
00:19:08.361 [2024-10-21 10:03:44.749312] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:19:08.361 [2024-10-21 10:03:44.749450] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:19:08.361 [2024-10-21 10:03:44.749602] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:19:08.361 [2024-10-21 10:03:44.749629] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline
00:19:08.620 [2024-10-21 10:03:44.980038] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:19:09.999 10:03:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0
00:19:10.000
00:19:10.000 real 0m6.533s
00:19:10.000 user 0m9.817s
00:19:10.000 sys 0m1.201s
************************************
00:19:10.000 END TEST raid_superblock_test_md_interleaved
************************************
00:19:10.000 10:03:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable
00:19:10.000 10:03:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:19:10.000 10:03:46 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false
00:19:10.000 10:03:46 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']'
00:19:10.000 10:03:46 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:19:10.000 10:03:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:19:10.000 ************************************
00:19:10.000 START TEST raid_rebuild_test_sb_md_interleaved
************************************
00:19:10.000 10:03:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false false
00:19:10.000 10:03:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1
00:19:10.000 10:03:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2
00:19:10.000 10:03:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true
00:19:10.000 10:03:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false
00:19:10.000 10:03:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false
00:19:10.000 10:03:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 ))
00:19:10.000 10:03:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:19:10.000 10:03:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1
00:19:10.000 10:03:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:19:10.000 10:03:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:19:10.000 10:03:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2
00:19:10.000 10:03:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:19:10.000 10:03:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:19:10.000 10:03:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:19:10.000 10:03:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs
00:19:10.000 10:03:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1
00:19:10.000 10:03:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size
00:19:10.000 10:03:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg
00:19:10.000 10:03:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size
00:19:10.000 10:03:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset
00:19:10.000 10:03:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']'
00:19:10.000 10:03:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0
00:19:10.000 10:03:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']'
00:19:10.000 10:03:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s'
00:19:10.000 10:03:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # raid_pid=88757
00:19:10.000 10:03:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid
00:19:10.000 10:03:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 88757
00:19:10.000 10:03:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 88757 ']'
00:19:10.000 10:03:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:19:10.000 10:03:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100
00:19:10.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
10:03:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:19:10.000 10:03:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable
00:19:10.000 10:03:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:19:10.000 I/O size of 3145728 is greater than zero copy threshold (65536).
00:19:10.000 Zero copy mechanism will not be used.
00:19:10.000 [2024-10-21 10:03:46.394237] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization...
00:19:10.000 [2024-10-21 10:03:46.394445] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88757 ]
00:19:10.000 [2024-10-21 10:03:46.573671] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:10.260 [2024-10-21 10:03:46.721567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:19:10.520 [2024-10-21 10:03:46.980041] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:19:10.520 [2024-10-21 10:03:46.980134] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:19:10.779 10:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:19:10.779 10:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # return 0
00:19:10.779 10:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:19:10.779 10:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc
00:19:10.779 10:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:10.779 10:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:19:10.779 BaseBdev1_malloc
00:19:10.779 10:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:10.779 10:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:19:10.779 10:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:10.779 10:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:19:10.779 [2024-10-21 10:03:47.343119] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:19:10.779 [2024-10-21 10:03:47.343280] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:19:10.779 [2024-10-21 10:03:47.343306] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80
00:19:10.779 [2024-10-21 10:03:47.343319] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:19:10.779 [2024-10-21 10:03:47.345470] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:19:10.779 [2024-10-21 10:03:47.345511] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:19:10.779 BaseBdev1
00:19:10.779 10:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:10.779 10:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:19:10.779 10:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc
00:19:10.779 10:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:10.779 10:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:19:11.039 BaseBdev2_malloc
00:19:11.039 10:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:11.039 10:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
00:19:11.039 10:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:11.039 10:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:19:11.039 [2024-10-21 10:03:47.395679] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc
00:19:11.039 [2024-10-21 10:03:47.395823] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:19:11.039 [2024-10-21 10:03:47.395861] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80
00:19:11.039 [2024-10-21 10:03:47.395895] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:19:11.039 [2024-10-21 10:03:47.398029] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:19:11.039 BaseBdev2
[2024-10-21 10:03:47.398102] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:19:11.039 10:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:11.039 10:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc
00:19:11.039 10:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:11.039 10:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:19:11.039 spare_malloc
00:19:11.039 10:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:11.039 10:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
00:19:11.039 10:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:11.039 10:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:19:11.039 spare_delay
00:19:11.039 10:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved --
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.039 10:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:11.039 10:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.039 10:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:11.039 [2024-10-21 10:03:47.480625] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:11.039 [2024-10-21 10:03:47.480764] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:11.039 [2024-10-21 10:03:47.480804] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:19:11.039 [2024-10-21 10:03:47.480838] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:11.039 [2024-10-21 10:03:47.482999] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:11.039 [2024-10-21 10:03:47.483076] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:11.039 spare 00:19:11.039 10:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.039 10:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:19:11.039 10:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.039 10:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:11.039 [2024-10-21 10:03:47.488654] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:11.039 [2024-10-21 10:03:47.490809] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:11.039 [2024-10-21 
10:03:47.491058] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000005b80 00:19:11.039 [2024-10-21 10:03:47.491116] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:11.039 [2024-10-21 10:03:47.491231] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:19:11.039 [2024-10-21 10:03:47.491338] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000005b80 00:19:11.039 [2024-10-21 10:03:47.491376] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000005b80 00:19:11.039 [2024-10-21 10:03:47.491486] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:11.039 10:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.039 10:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:11.039 10:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:11.039 10:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:11.039 10:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:11.039 10:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:11.039 10:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:11.039 10:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:11.039 10:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:11.039 10:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:19:11.039 10:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:11.039 10:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:11.039 10:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:11.039 10:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.039 10:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:11.039 10:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.039 10:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:11.039 "name": "raid_bdev1", 00:19:11.039 "uuid": "983d4dda-5ced-4879-83d4-d11ce29cbac9", 00:19:11.039 "strip_size_kb": 0, 00:19:11.040 "state": "online", 00:19:11.040 "raid_level": "raid1", 00:19:11.040 "superblock": true, 00:19:11.040 "num_base_bdevs": 2, 00:19:11.040 "num_base_bdevs_discovered": 2, 00:19:11.040 "num_base_bdevs_operational": 2, 00:19:11.040 "base_bdevs_list": [ 00:19:11.040 { 00:19:11.040 "name": "BaseBdev1", 00:19:11.040 "uuid": "20f906f3-b105-55dc-bd96-45970a513898", 00:19:11.040 "is_configured": true, 00:19:11.040 "data_offset": 256, 00:19:11.040 "data_size": 7936 00:19:11.040 }, 00:19:11.040 { 00:19:11.040 "name": "BaseBdev2", 00:19:11.040 "uuid": "f85b71fa-cbe5-5247-92e9-d1615ff5bd07", 00:19:11.040 "is_configured": true, 00:19:11.040 "data_offset": 256, 00:19:11.040 "data_size": 7936 00:19:11.040 } 00:19:11.040 ] 00:19:11.040 }' 00:19:11.040 10:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:11.040 10:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:11.609 10:03:47 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:11.609 10:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:11.609 10:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.609 10:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:11.609 [2024-10-21 10:03:47.936169] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:11.609 10:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.609 10:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:19:11.609 10:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:11.609 10:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:11.609 10:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.609 10:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:11.609 10:03:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.609 10:03:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:19:11.609 10:03:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:19:11.609 10:03:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:19:11.609 10:03:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:11.609 10:03:48 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.609 10:03:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:11.609 [2024-10-21 10:03:48.047673] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:11.609 10:03:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.609 10:03:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:11.609 10:03:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:11.609 10:03:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:11.609 10:03:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:11.609 10:03:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:11.609 10:03:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:11.609 10:03:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:11.609 10:03:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:11.609 10:03:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:11.609 10:03:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:11.609 10:03:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:11.609 10:03:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:11.609 10:03:48 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.609 10:03:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:11.609 10:03:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.609 10:03:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:11.609 "name": "raid_bdev1", 00:19:11.609 "uuid": "983d4dda-5ced-4879-83d4-d11ce29cbac9", 00:19:11.609 "strip_size_kb": 0, 00:19:11.609 "state": "online", 00:19:11.609 "raid_level": "raid1", 00:19:11.609 "superblock": true, 00:19:11.609 "num_base_bdevs": 2, 00:19:11.609 "num_base_bdevs_discovered": 1, 00:19:11.609 "num_base_bdevs_operational": 1, 00:19:11.609 "base_bdevs_list": [ 00:19:11.609 { 00:19:11.609 "name": null, 00:19:11.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:11.609 "is_configured": false, 00:19:11.609 "data_offset": 0, 00:19:11.609 "data_size": 7936 00:19:11.609 }, 00:19:11.609 { 00:19:11.609 "name": "BaseBdev2", 00:19:11.609 "uuid": "f85b71fa-cbe5-5247-92e9-d1615ff5bd07", 00:19:11.609 "is_configured": true, 00:19:11.609 "data_offset": 256, 00:19:11.609 "data_size": 7936 00:19:11.609 } 00:19:11.609 ] 00:19:11.609 }' 00:19:11.609 10:03:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:11.609 10:03:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:12.178 10:03:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:12.178 10:03:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.178 10:03:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:12.178 [2024-10-21 10:03:48.482973] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:12.178 [2024-10-21 10:03:48.503994] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:19:12.178 10:03:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.178 10:03:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:12.178 [2024-10-21 10:03:48.506382] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:13.114 10:03:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:13.114 10:03:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:13.114 10:03:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:13.114 10:03:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:13.114 10:03:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:13.114 10:03:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:13.114 10:03:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:13.114 10:03:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.114 10:03:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:13.114 10:03:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.114 10:03:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:13.114 "name": "raid_bdev1", 00:19:13.114 
"uuid": "983d4dda-5ced-4879-83d4-d11ce29cbac9", 00:19:13.114 "strip_size_kb": 0, 00:19:13.114 "state": "online", 00:19:13.114 "raid_level": "raid1", 00:19:13.114 "superblock": true, 00:19:13.114 "num_base_bdevs": 2, 00:19:13.114 "num_base_bdevs_discovered": 2, 00:19:13.114 "num_base_bdevs_operational": 2, 00:19:13.114 "process": { 00:19:13.114 "type": "rebuild", 00:19:13.114 "target": "spare", 00:19:13.114 "progress": { 00:19:13.114 "blocks": 2560, 00:19:13.114 "percent": 32 00:19:13.114 } 00:19:13.114 }, 00:19:13.114 "base_bdevs_list": [ 00:19:13.114 { 00:19:13.114 "name": "spare", 00:19:13.114 "uuid": "a5ae62fd-ecb6-56f5-8167-cd5838f0bd3c", 00:19:13.114 "is_configured": true, 00:19:13.114 "data_offset": 256, 00:19:13.114 "data_size": 7936 00:19:13.114 }, 00:19:13.114 { 00:19:13.114 "name": "BaseBdev2", 00:19:13.114 "uuid": "f85b71fa-cbe5-5247-92e9-d1615ff5bd07", 00:19:13.114 "is_configured": true, 00:19:13.114 "data_offset": 256, 00:19:13.114 "data_size": 7936 00:19:13.114 } 00:19:13.114 ] 00:19:13.114 }' 00:19:13.114 10:03:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:13.114 10:03:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:13.114 10:03:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:13.114 10:03:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:13.114 10:03:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:13.114 10:03:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.114 10:03:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:13.114 [2024-10-21 10:03:49.654701] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:19:13.376 [2024-10-21 10:03:49.716495] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:13.376 [2024-10-21 10:03:49.716588] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:13.376 [2024-10-21 10:03:49.716604] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:13.376 [2024-10-21 10:03:49.716615] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:13.376 10:03:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.376 10:03:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:13.376 10:03:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:13.376 10:03:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:13.376 10:03:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:13.376 10:03:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:13.376 10:03:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:13.376 10:03:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:13.376 10:03:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:13.376 10:03:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:13.376 10:03:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:13.376 10:03:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:13.376 10:03:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:13.376 10:03:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.376 10:03:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:13.376 10:03:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.376 10:03:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:13.376 "name": "raid_bdev1", 00:19:13.376 "uuid": "983d4dda-5ced-4879-83d4-d11ce29cbac9", 00:19:13.376 "strip_size_kb": 0, 00:19:13.376 "state": "online", 00:19:13.376 "raid_level": "raid1", 00:19:13.376 "superblock": true, 00:19:13.376 "num_base_bdevs": 2, 00:19:13.376 "num_base_bdevs_discovered": 1, 00:19:13.376 "num_base_bdevs_operational": 1, 00:19:13.376 "base_bdevs_list": [ 00:19:13.376 { 00:19:13.376 "name": null, 00:19:13.376 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:13.376 "is_configured": false, 00:19:13.376 "data_offset": 0, 00:19:13.376 "data_size": 7936 00:19:13.376 }, 00:19:13.376 { 00:19:13.376 "name": "BaseBdev2", 00:19:13.376 "uuid": "f85b71fa-cbe5-5247-92e9-d1615ff5bd07", 00:19:13.376 "is_configured": true, 00:19:13.376 "data_offset": 256, 00:19:13.376 "data_size": 7936 00:19:13.376 } 00:19:13.376 ] 00:19:13.376 }' 00:19:13.376 10:03:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:13.376 10:03:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:13.639 10:03:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:13.639 10:03:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:19:13.639 10:03:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:13.639 10:03:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:13.639 10:03:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:13.639 10:03:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:13.639 10:03:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.639 10:03:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:13.639 10:03:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:13.639 10:03:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.899 10:03:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:13.899 "name": "raid_bdev1", 00:19:13.899 "uuid": "983d4dda-5ced-4879-83d4-d11ce29cbac9", 00:19:13.899 "strip_size_kb": 0, 00:19:13.899 "state": "online", 00:19:13.899 "raid_level": "raid1", 00:19:13.899 "superblock": true, 00:19:13.899 "num_base_bdevs": 2, 00:19:13.899 "num_base_bdevs_discovered": 1, 00:19:13.899 "num_base_bdevs_operational": 1, 00:19:13.899 "base_bdevs_list": [ 00:19:13.899 { 00:19:13.899 "name": null, 00:19:13.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:13.899 "is_configured": false, 00:19:13.899 "data_offset": 0, 00:19:13.899 "data_size": 7936 00:19:13.899 }, 00:19:13.899 { 00:19:13.899 "name": "BaseBdev2", 00:19:13.899 "uuid": "f85b71fa-cbe5-5247-92e9-d1615ff5bd07", 00:19:13.899 "is_configured": true, 00:19:13.899 "data_offset": 256, 00:19:13.899 "data_size": 7936 00:19:13.899 } 00:19:13.899 ] 00:19:13.899 }' 
00:19:13.899 10:03:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:13.899 10:03:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:13.899 10:03:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:13.899 10:03:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:13.899 10:03:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:13.899 10:03:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.899 10:03:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:13.899 [2024-10-21 10:03:50.363979] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:13.899 [2024-10-21 10:03:50.384360] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:13.899 10:03:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.899 [2024-10-21 10:03:50.386760] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:13.899 10:03:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:14.837 10:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:14.837 10:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:14.837 10:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:14.837 10:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:19:14.837 10:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:14.837 10:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:14.837 10:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:14.837 10:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.837 10:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:14.837 10:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.096 10:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:15.096 "name": "raid_bdev1", 00:19:15.096 "uuid": "983d4dda-5ced-4879-83d4-d11ce29cbac9", 00:19:15.096 "strip_size_kb": 0, 00:19:15.096 "state": "online", 00:19:15.096 "raid_level": "raid1", 00:19:15.096 "superblock": true, 00:19:15.096 "num_base_bdevs": 2, 00:19:15.096 "num_base_bdevs_discovered": 2, 00:19:15.096 "num_base_bdevs_operational": 2, 00:19:15.096 "process": { 00:19:15.096 "type": "rebuild", 00:19:15.096 "target": "spare", 00:19:15.096 "progress": { 00:19:15.096 "blocks": 2560, 00:19:15.096 "percent": 32 00:19:15.096 } 00:19:15.096 }, 00:19:15.096 "base_bdevs_list": [ 00:19:15.096 { 00:19:15.096 "name": "spare", 00:19:15.096 "uuid": "a5ae62fd-ecb6-56f5-8167-cd5838f0bd3c", 00:19:15.096 "is_configured": true, 00:19:15.096 "data_offset": 256, 00:19:15.096 "data_size": 7936 00:19:15.096 }, 00:19:15.096 { 00:19:15.096 "name": "BaseBdev2", 00:19:15.096 "uuid": "f85b71fa-cbe5-5247-92e9-d1615ff5bd07", 00:19:15.096 "is_configured": true, 00:19:15.096 "data_offset": 256, 00:19:15.096 "data_size": 7936 00:19:15.096 } 00:19:15.096 ] 00:19:15.096 }' 00:19:15.096 10:03:51 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:15.096 10:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:15.096 10:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:15.096 10:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:15.096 10:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:19:15.096 10:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:19:15.096 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:19:15.096 10:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:19:15.096 10:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:19:15.096 10:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:19:15.096 10:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=758 00:19:15.096 10:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:15.096 10:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:15.096 10:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:15.096 10:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:15.096 10:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:15.096 10:03:51 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:15.096 10:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:15.096 10:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:15.096 10:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.096 10:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:15.096 10:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.096 10:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:15.096 "name": "raid_bdev1", 00:19:15.096 "uuid": "983d4dda-5ced-4879-83d4-d11ce29cbac9", 00:19:15.096 "strip_size_kb": 0, 00:19:15.096 "state": "online", 00:19:15.096 "raid_level": "raid1", 00:19:15.096 "superblock": true, 00:19:15.096 "num_base_bdevs": 2, 00:19:15.096 "num_base_bdevs_discovered": 2, 00:19:15.096 "num_base_bdevs_operational": 2, 00:19:15.096 "process": { 00:19:15.096 "type": "rebuild", 00:19:15.096 "target": "spare", 00:19:15.096 "progress": { 00:19:15.096 "blocks": 2816, 00:19:15.096 "percent": 35 00:19:15.096 } 00:19:15.096 }, 00:19:15.096 "base_bdevs_list": [ 00:19:15.096 { 00:19:15.096 "name": "spare", 00:19:15.096 "uuid": "a5ae62fd-ecb6-56f5-8167-cd5838f0bd3c", 00:19:15.096 "is_configured": true, 00:19:15.096 "data_offset": 256, 00:19:15.096 "data_size": 7936 00:19:15.096 }, 00:19:15.096 { 00:19:15.096 "name": "BaseBdev2", 00:19:15.096 "uuid": "f85b71fa-cbe5-5247-92e9-d1615ff5bd07", 00:19:15.096 "is_configured": true, 00:19:15.096 "data_offset": 256, 00:19:15.096 "data_size": 7936 00:19:15.096 } 00:19:15.096 ] 00:19:15.096 }' 00:19:15.097 10:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:15.097 10:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:15.097 10:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:15.097 10:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:15.097 10:03:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:16.475 10:03:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:16.475 10:03:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:16.475 10:03:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:16.475 10:03:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:16.475 10:03:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:16.475 10:03:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:16.475 10:03:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:16.475 10:03:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:16.475 10:03:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.475 10:03:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:16.475 10:03:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.475 10:03:52 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:16.475 "name": "raid_bdev1", 00:19:16.475 "uuid": "983d4dda-5ced-4879-83d4-d11ce29cbac9", 00:19:16.475 "strip_size_kb": 0, 00:19:16.475 "state": "online", 00:19:16.475 "raid_level": "raid1", 00:19:16.475 "superblock": true, 00:19:16.475 "num_base_bdevs": 2, 00:19:16.475 "num_base_bdevs_discovered": 2, 00:19:16.475 "num_base_bdevs_operational": 2, 00:19:16.475 "process": { 00:19:16.475 "type": "rebuild", 00:19:16.475 "target": "spare", 00:19:16.475 "progress": { 00:19:16.475 "blocks": 5632, 00:19:16.475 "percent": 70 00:19:16.475 } 00:19:16.475 }, 00:19:16.475 "base_bdevs_list": [ 00:19:16.475 { 00:19:16.475 "name": "spare", 00:19:16.475 "uuid": "a5ae62fd-ecb6-56f5-8167-cd5838f0bd3c", 00:19:16.475 "is_configured": true, 00:19:16.475 "data_offset": 256, 00:19:16.475 "data_size": 7936 00:19:16.475 }, 00:19:16.475 { 00:19:16.475 "name": "BaseBdev2", 00:19:16.475 "uuid": "f85b71fa-cbe5-5247-92e9-d1615ff5bd07", 00:19:16.475 "is_configured": true, 00:19:16.475 "data_offset": 256, 00:19:16.475 "data_size": 7936 00:19:16.475 } 00:19:16.475 ] 00:19:16.475 }' 00:19:16.475 10:03:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:16.475 10:03:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:16.475 10:03:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:16.475 10:03:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:16.475 10:03:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:17.043 [2024-10-21 10:03:53.513316] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:17.043 [2024-10-21 10:03:53.513419] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:17.043 [2024-10-21 10:03:53.513580] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:17.303 10:03:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:17.303 10:03:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:17.303 10:03:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:17.303 10:03:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:17.303 10:03:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:17.303 10:03:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:17.303 10:03:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:17.303 10:03:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:17.303 10:03:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.303 10:03:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:17.303 10:03:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.303 10:03:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:17.303 "name": "raid_bdev1", 00:19:17.303 "uuid": "983d4dda-5ced-4879-83d4-d11ce29cbac9", 00:19:17.303 "strip_size_kb": 0, 00:19:17.303 "state": "online", 00:19:17.303 "raid_level": "raid1", 00:19:17.303 "superblock": true, 00:19:17.303 "num_base_bdevs": 2, 00:19:17.303 
"num_base_bdevs_discovered": 2, 00:19:17.303 "num_base_bdevs_operational": 2, 00:19:17.303 "base_bdevs_list": [ 00:19:17.303 { 00:19:17.303 "name": "spare", 00:19:17.303 "uuid": "a5ae62fd-ecb6-56f5-8167-cd5838f0bd3c", 00:19:17.303 "is_configured": true, 00:19:17.303 "data_offset": 256, 00:19:17.303 "data_size": 7936 00:19:17.303 }, 00:19:17.303 { 00:19:17.303 "name": "BaseBdev2", 00:19:17.303 "uuid": "f85b71fa-cbe5-5247-92e9-d1615ff5bd07", 00:19:17.303 "is_configured": true, 00:19:17.303 "data_offset": 256, 00:19:17.303 "data_size": 7936 00:19:17.303 } 00:19:17.303 ] 00:19:17.303 }' 00:19:17.303 10:03:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:17.562 10:03:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:17.562 10:03:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:17.562 10:03:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:17.562 10:03:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:19:17.562 10:03:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:17.562 10:03:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:17.562 10:03:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:17.562 10:03:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:17.562 10:03:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:17.562 10:03:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:17.562 
10:03:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:17.563 10:03:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.563 10:03:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:17.563 10:03:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.563 10:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:17.563 "name": "raid_bdev1", 00:19:17.563 "uuid": "983d4dda-5ced-4879-83d4-d11ce29cbac9", 00:19:17.563 "strip_size_kb": 0, 00:19:17.563 "state": "online", 00:19:17.563 "raid_level": "raid1", 00:19:17.563 "superblock": true, 00:19:17.563 "num_base_bdevs": 2, 00:19:17.563 "num_base_bdevs_discovered": 2, 00:19:17.563 "num_base_bdevs_operational": 2, 00:19:17.563 "base_bdevs_list": [ 00:19:17.563 { 00:19:17.563 "name": "spare", 00:19:17.563 "uuid": "a5ae62fd-ecb6-56f5-8167-cd5838f0bd3c", 00:19:17.563 "is_configured": true, 00:19:17.563 "data_offset": 256, 00:19:17.563 "data_size": 7936 00:19:17.563 }, 00:19:17.563 { 00:19:17.563 "name": "BaseBdev2", 00:19:17.563 "uuid": "f85b71fa-cbe5-5247-92e9-d1615ff5bd07", 00:19:17.563 "is_configured": true, 00:19:17.563 "data_offset": 256, 00:19:17.563 "data_size": 7936 00:19:17.563 } 00:19:17.563 ] 00:19:17.563 }' 00:19:17.563 10:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:17.563 10:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:17.563 10:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:17.563 10:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:17.563 10:03:54 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:17.563 10:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:17.563 10:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:17.563 10:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:17.563 10:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:17.563 10:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:17.563 10:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:17.563 10:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:17.563 10:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:17.563 10:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:17.563 10:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:17.563 10:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.563 10:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:17.563 10:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:17.563 10:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.822 10:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:17.822 "name": 
"raid_bdev1", 00:19:17.822 "uuid": "983d4dda-5ced-4879-83d4-d11ce29cbac9", 00:19:17.822 "strip_size_kb": 0, 00:19:17.822 "state": "online", 00:19:17.822 "raid_level": "raid1", 00:19:17.822 "superblock": true, 00:19:17.822 "num_base_bdevs": 2, 00:19:17.822 "num_base_bdevs_discovered": 2, 00:19:17.822 "num_base_bdevs_operational": 2, 00:19:17.822 "base_bdevs_list": [ 00:19:17.822 { 00:19:17.822 "name": "spare", 00:19:17.822 "uuid": "a5ae62fd-ecb6-56f5-8167-cd5838f0bd3c", 00:19:17.822 "is_configured": true, 00:19:17.822 "data_offset": 256, 00:19:17.822 "data_size": 7936 00:19:17.822 }, 00:19:17.822 { 00:19:17.822 "name": "BaseBdev2", 00:19:17.822 "uuid": "f85b71fa-cbe5-5247-92e9-d1615ff5bd07", 00:19:17.822 "is_configured": true, 00:19:17.822 "data_offset": 256, 00:19:17.822 "data_size": 7936 00:19:17.822 } 00:19:17.822 ] 00:19:17.822 }' 00:19:17.822 10:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:17.822 10:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:18.081 10:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:18.081 10:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.081 10:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:18.081 [2024-10-21 10:03:54.583242] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:18.081 [2024-10-21 10:03:54.583391] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:18.081 [2024-10-21 10:03:54.583524] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:18.081 [2024-10-21 10:03:54.583639] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:18.081 [2024-10-21 
10:03:54.583724] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000005b80 name raid_bdev1, state offline 00:19:18.081 10:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.081 10:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:18.081 10:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:19:18.081 10:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.081 10:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:18.081 10:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.081 10:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:18.081 10:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:19:18.081 10:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:19:18.081 10:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:19:18.081 10:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.081 10:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:18.081 10:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.081 10:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:18.081 10:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.081 10:03:54 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:18.081 [2024-10-21 10:03:54.659082] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:18.081 [2024-10-21 10:03:54.659267] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:18.081 [2024-10-21 10:03:54.659308] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:19:18.081 [2024-10-21 10:03:54.659337] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:18.081 [2024-10-21 10:03:54.661694] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:18.081 [2024-10-21 10:03:54.661768] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:18.081 [2024-10-21 10:03:54.661857] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:18.081 [2024-10-21 10:03:54.661950] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:18.081 [2024-10-21 10:03:54.662102] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:18.081 spare 00:19:18.081 10:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.081 10:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:19:18.081 10:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.081 10:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:18.341 [2024-10-21 10:03:54.762064] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000005f00 00:19:18.341 [2024-10-21 10:03:54.762208] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:18.341 [2024-10-21 10:03:54.762374] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:18.341 [2024-10-21 10:03:54.762526] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000005f00 00:19:18.341 [2024-10-21 10:03:54.762562] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000005f00 00:19:18.341 [2024-10-21 10:03:54.762736] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:18.341 10:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.341 10:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:18.341 10:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:18.341 10:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:18.341 10:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:18.341 10:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:18.341 10:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:18.341 10:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:18.341 10:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:18.341 10:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:18.341 10:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:18.341 10:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:18.341 
10:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:18.341 10:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.341 10:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:18.341 10:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.341 10:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:18.341 "name": "raid_bdev1", 00:19:18.341 "uuid": "983d4dda-5ced-4879-83d4-d11ce29cbac9", 00:19:18.341 "strip_size_kb": 0, 00:19:18.341 "state": "online", 00:19:18.341 "raid_level": "raid1", 00:19:18.341 "superblock": true, 00:19:18.341 "num_base_bdevs": 2, 00:19:18.341 "num_base_bdevs_discovered": 2, 00:19:18.341 "num_base_bdevs_operational": 2, 00:19:18.341 "base_bdevs_list": [ 00:19:18.341 { 00:19:18.341 "name": "spare", 00:19:18.341 "uuid": "a5ae62fd-ecb6-56f5-8167-cd5838f0bd3c", 00:19:18.341 "is_configured": true, 00:19:18.341 "data_offset": 256, 00:19:18.341 "data_size": 7936 00:19:18.341 }, 00:19:18.341 { 00:19:18.341 "name": "BaseBdev2", 00:19:18.341 "uuid": "f85b71fa-cbe5-5247-92e9-d1615ff5bd07", 00:19:18.341 "is_configured": true, 00:19:18.341 "data_offset": 256, 00:19:18.341 "data_size": 7936 00:19:18.341 } 00:19:18.341 ] 00:19:18.341 }' 00:19:18.341 10:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:18.341 10:03:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:18.910 10:03:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:18.910 10:03:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:18.910 10:03:55 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:18.910 10:03:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:18.910 10:03:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:18.910 10:03:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:18.910 10:03:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.910 10:03:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:18.910 10:03:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:18.910 10:03:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.910 10:03:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:18.910 "name": "raid_bdev1", 00:19:18.910 "uuid": "983d4dda-5ced-4879-83d4-d11ce29cbac9", 00:19:18.910 "strip_size_kb": 0, 00:19:18.910 "state": "online", 00:19:18.910 "raid_level": "raid1", 00:19:18.910 "superblock": true, 00:19:18.910 "num_base_bdevs": 2, 00:19:18.910 "num_base_bdevs_discovered": 2, 00:19:18.910 "num_base_bdevs_operational": 2, 00:19:18.910 "base_bdevs_list": [ 00:19:18.910 { 00:19:18.910 "name": "spare", 00:19:18.910 "uuid": "a5ae62fd-ecb6-56f5-8167-cd5838f0bd3c", 00:19:18.910 "is_configured": true, 00:19:18.910 "data_offset": 256, 00:19:18.910 "data_size": 7936 00:19:18.910 }, 00:19:18.910 { 00:19:18.910 "name": "BaseBdev2", 00:19:18.910 "uuid": "f85b71fa-cbe5-5247-92e9-d1615ff5bd07", 00:19:18.910 "is_configured": true, 00:19:18.910 "data_offset": 256, 00:19:18.910 "data_size": 7936 00:19:18.910 } 00:19:18.910 ] 00:19:18.910 }' 00:19:18.910 10:03:55 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:18.910 10:03:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:18.910 10:03:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:18.910 10:03:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:18.910 10:03:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:18.910 10:03:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:19:18.910 10:03:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.910 10:03:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:18.910 10:03:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.910 10:03:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:19:18.910 10:03:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:18.910 10:03:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.910 10:03:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:18.910 [2024-10-21 10:03:55.382022] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:18.910 10:03:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.910 10:03:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:18.910 10:03:55 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:18.910 10:03:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:18.910 10:03:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:18.910 10:03:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:18.910 10:03:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:18.910 10:03:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:18.910 10:03:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:18.910 10:03:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:18.910 10:03:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:18.910 10:03:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:18.910 10:03:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.910 10:03:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:18.910 10:03:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:18.910 10:03:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.910 10:03:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:18.910 "name": "raid_bdev1", 00:19:18.910 "uuid": "983d4dda-5ced-4879-83d4-d11ce29cbac9", 00:19:18.910 "strip_size_kb": 0, 00:19:18.910 "state": "online", 00:19:18.910 
"raid_level": "raid1", 00:19:18.910 "superblock": true, 00:19:18.910 "num_base_bdevs": 2, 00:19:18.910 "num_base_bdevs_discovered": 1, 00:19:18.910 "num_base_bdevs_operational": 1, 00:19:18.910 "base_bdevs_list": [ 00:19:18.910 { 00:19:18.910 "name": null, 00:19:18.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:18.910 "is_configured": false, 00:19:18.910 "data_offset": 0, 00:19:18.910 "data_size": 7936 00:19:18.910 }, 00:19:18.910 { 00:19:18.910 "name": "BaseBdev2", 00:19:18.910 "uuid": "f85b71fa-cbe5-5247-92e9-d1615ff5bd07", 00:19:18.910 "is_configured": true, 00:19:18.910 "data_offset": 256, 00:19:18.910 "data_size": 7936 00:19:18.910 } 00:19:18.910 ] 00:19:18.910 }' 00:19:18.910 10:03:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:18.910 10:03:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:19.480 10:03:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:19.480 10:03:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.480 10:03:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:19.480 [2024-10-21 10:03:55.833306] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:19.480 [2024-10-21 10:03:55.833668] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:19.480 [2024-10-21 10:03:55.833691] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:19.480 [2024-10-21 10:03:55.833735] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:19.480 [2024-10-21 10:03:55.852973] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:19.480 10:03:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.480 10:03:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:19:19.480 [2024-10-21 10:03:55.855236] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:20.418 10:03:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:20.418 10:03:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:20.418 10:03:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:20.418 10:03:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:20.418 10:03:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:20.418 10:03:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:20.418 10:03:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.418 10:03:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:20.418 10:03:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:20.419 10:03:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.419 10:03:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:19:20.419 "name": "raid_bdev1", 00:19:20.419 "uuid": "983d4dda-5ced-4879-83d4-d11ce29cbac9", 00:19:20.419 "strip_size_kb": 0, 00:19:20.419 "state": "online", 00:19:20.419 "raid_level": "raid1", 00:19:20.419 "superblock": true, 00:19:20.419 "num_base_bdevs": 2, 00:19:20.419 "num_base_bdevs_discovered": 2, 00:19:20.419 "num_base_bdevs_operational": 2, 00:19:20.419 "process": { 00:19:20.419 "type": "rebuild", 00:19:20.419 "target": "spare", 00:19:20.419 "progress": { 00:19:20.419 "blocks": 2560, 00:19:20.419 "percent": 32 00:19:20.419 } 00:19:20.419 }, 00:19:20.419 "base_bdevs_list": [ 00:19:20.419 { 00:19:20.419 "name": "spare", 00:19:20.419 "uuid": "a5ae62fd-ecb6-56f5-8167-cd5838f0bd3c", 00:19:20.419 "is_configured": true, 00:19:20.419 "data_offset": 256, 00:19:20.419 "data_size": 7936 00:19:20.419 }, 00:19:20.419 { 00:19:20.419 "name": "BaseBdev2", 00:19:20.419 "uuid": "f85b71fa-cbe5-5247-92e9-d1615ff5bd07", 00:19:20.419 "is_configured": true, 00:19:20.419 "data_offset": 256, 00:19:20.419 "data_size": 7936 00:19:20.419 } 00:19:20.419 ] 00:19:20.419 }' 00:19:20.419 10:03:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:20.419 10:03:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:20.419 10:03:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:20.419 10:03:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:20.419 10:03:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:19:20.419 10:03:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.419 10:03:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:20.679 [2024-10-21 10:03:57.015349] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:20.679 [2024-10-21 10:03:57.065751] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:20.679 [2024-10-21 10:03:57.065971] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:20.679 [2024-10-21 10:03:57.066009] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:20.679 [2024-10-21 10:03:57.066033] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:20.679 10:03:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.679 10:03:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:20.679 10:03:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:20.679 10:03:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:20.679 10:03:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:20.679 10:03:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:20.679 10:03:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:20.679 10:03:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:20.679 10:03:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:20.679 10:03:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:20.679 10:03:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:20.679 10:03:57 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:20.679 10:03:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:20.679 10:03:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.679 10:03:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:20.679 10:03:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.679 10:03:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:20.679 "name": "raid_bdev1", 00:19:20.679 "uuid": "983d4dda-5ced-4879-83d4-d11ce29cbac9", 00:19:20.679 "strip_size_kb": 0, 00:19:20.679 "state": "online", 00:19:20.679 "raid_level": "raid1", 00:19:20.679 "superblock": true, 00:19:20.679 "num_base_bdevs": 2, 00:19:20.679 "num_base_bdevs_discovered": 1, 00:19:20.679 "num_base_bdevs_operational": 1, 00:19:20.679 "base_bdevs_list": [ 00:19:20.679 { 00:19:20.679 "name": null, 00:19:20.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:20.679 "is_configured": false, 00:19:20.679 "data_offset": 0, 00:19:20.679 "data_size": 7936 00:19:20.679 }, 00:19:20.679 { 00:19:20.679 "name": "BaseBdev2", 00:19:20.679 "uuid": "f85b71fa-cbe5-5247-92e9-d1615ff5bd07", 00:19:20.679 "is_configured": true, 00:19:20.679 "data_offset": 256, 00:19:20.679 "data_size": 7936 00:19:20.679 } 00:19:20.679 ] 00:19:20.679 }' 00:19:20.679 10:03:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:20.679 10:03:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:21.247 10:03:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:21.247 10:03:57 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.247 10:03:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:21.247 [2024-10-21 10:03:57.560771] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:21.247 [2024-10-21 10:03:57.560960] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:21.247 [2024-10-21 10:03:57.561004] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:19:21.247 [2024-10-21 10:03:57.561038] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:21.247 [2024-10-21 10:03:57.561326] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:21.247 [2024-10-21 10:03:57.561386] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:21.247 [2024-10-21 10:03:57.561484] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:21.247 [2024-10-21 10:03:57.561536] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:21.247 [2024-10-21 10:03:57.561549] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
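(Editorial sketch, not part of the captured log.) The entries above show `raid_bdev_examine_sb` deciding to re-add the `spare` bdev because its superblock sequence number (4) is older than the one on the existing `raid_bdev1` (5). The decision reduces to a simple integer comparison; the values below are taken from the log lines above, and the snippet is a hedged illustration only, not SPDK source.

```shell
# Hedged sketch of the superblock sequence-number check logged above:
# "seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5)"
sb_seq=4      # seq_number read from the superblock on the re-added base bdev
raid_seq=5    # seq_number of the already-configured raid_bdev1
if (( sb_seq < raid_seq )); then
    # An older superblock means the bdev dropped out mid-operation; it is
    # re-added as a base bdev and the rebuild process restarts on it.
    echo "Re-adding bdev spare to raid bdev raid_bdev1."
fi
```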
00:19:21.247 [2024-10-21 10:03:57.561591] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:21.247 [2024-10-21 10:03:57.580398] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:21.247 spare 00:19:21.247 10:03:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.247 10:03:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:19:21.247 [2024-10-21 10:03:57.582721] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:22.184 10:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:22.184 10:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:22.184 10:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:22.184 10:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:22.184 10:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:22.184 10:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:22.184 10:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:22.184 10:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.184 10:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:22.184 10:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.184 10:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:19:22.184 "name": "raid_bdev1", 00:19:22.184 "uuid": "983d4dda-5ced-4879-83d4-d11ce29cbac9", 00:19:22.184 "strip_size_kb": 0, 00:19:22.184 "state": "online", 00:19:22.184 "raid_level": "raid1", 00:19:22.184 "superblock": true, 00:19:22.184 "num_base_bdevs": 2, 00:19:22.184 "num_base_bdevs_discovered": 2, 00:19:22.184 "num_base_bdevs_operational": 2, 00:19:22.184 "process": { 00:19:22.184 "type": "rebuild", 00:19:22.184 "target": "spare", 00:19:22.184 "progress": { 00:19:22.184 "blocks": 2560, 00:19:22.184 "percent": 32 00:19:22.184 } 00:19:22.184 }, 00:19:22.184 "base_bdevs_list": [ 00:19:22.184 { 00:19:22.184 "name": "spare", 00:19:22.184 "uuid": "a5ae62fd-ecb6-56f5-8167-cd5838f0bd3c", 00:19:22.184 "is_configured": true, 00:19:22.184 "data_offset": 256, 00:19:22.184 "data_size": 7936 00:19:22.184 }, 00:19:22.184 { 00:19:22.184 "name": "BaseBdev2", 00:19:22.184 "uuid": "f85b71fa-cbe5-5247-92e9-d1615ff5bd07", 00:19:22.184 "is_configured": true, 00:19:22.184 "data_offset": 256, 00:19:22.184 "data_size": 7936 00:19:22.184 } 00:19:22.184 ] 00:19:22.184 }' 00:19:22.184 10:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:22.184 10:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:22.185 10:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:22.185 10:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:22.185 10:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:19:22.185 10:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.185 10:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:22.185 [2024-10-21 
10:03:58.725493] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:22.443 [2024-10-21 10:03:58.793264] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:22.443 [2024-10-21 10:03:58.793469] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:22.443 [2024-10-21 10:03:58.793514] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:22.443 [2024-10-21 10:03:58.793535] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:22.443 10:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.443 10:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:22.443 10:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:22.443 10:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:22.443 10:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:22.443 10:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:22.443 10:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:22.443 10:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:22.443 10:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:22.443 10:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:22.443 10:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:22.443 10:03:58 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:22.443 10:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:22.443 10:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.443 10:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:22.443 10:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.443 10:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:22.443 "name": "raid_bdev1", 00:19:22.443 "uuid": "983d4dda-5ced-4879-83d4-d11ce29cbac9", 00:19:22.443 "strip_size_kb": 0, 00:19:22.443 "state": "online", 00:19:22.443 "raid_level": "raid1", 00:19:22.443 "superblock": true, 00:19:22.443 "num_base_bdevs": 2, 00:19:22.443 "num_base_bdevs_discovered": 1, 00:19:22.443 "num_base_bdevs_operational": 1, 00:19:22.443 "base_bdevs_list": [ 00:19:22.443 { 00:19:22.443 "name": null, 00:19:22.443 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:22.443 "is_configured": false, 00:19:22.443 "data_offset": 0, 00:19:22.443 "data_size": 7936 00:19:22.443 }, 00:19:22.443 { 00:19:22.443 "name": "BaseBdev2", 00:19:22.443 "uuid": "f85b71fa-cbe5-5247-92e9-d1615ff5bd07", 00:19:22.443 "is_configured": true, 00:19:22.443 "data_offset": 256, 00:19:22.443 "data_size": 7936 00:19:22.443 } 00:19:22.443 ] 00:19:22.443 }' 00:19:22.443 10:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:22.443 10:03:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:22.701 10:03:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:22.701 10:03:59 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:22.701 10:03:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:22.701 10:03:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:22.701 10:03:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:22.701 10:03:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:22.701 10:03:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:22.701 10:03:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.701 10:03:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:22.960 10:03:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.960 10:03:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:22.960 "name": "raid_bdev1", 00:19:22.960 "uuid": "983d4dda-5ced-4879-83d4-d11ce29cbac9", 00:19:22.960 "strip_size_kb": 0, 00:19:22.960 "state": "online", 00:19:22.960 "raid_level": "raid1", 00:19:22.960 "superblock": true, 00:19:22.960 "num_base_bdevs": 2, 00:19:22.960 "num_base_bdevs_discovered": 1, 00:19:22.960 "num_base_bdevs_operational": 1, 00:19:22.960 "base_bdevs_list": [ 00:19:22.960 { 00:19:22.960 "name": null, 00:19:22.960 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:22.960 "is_configured": false, 00:19:22.960 "data_offset": 0, 00:19:22.960 "data_size": 7936 00:19:22.960 }, 00:19:22.960 { 00:19:22.960 "name": "BaseBdev2", 00:19:22.960 "uuid": "f85b71fa-cbe5-5247-92e9-d1615ff5bd07", 00:19:22.960 "is_configured": true, 00:19:22.960 "data_offset": 256, 
00:19:22.960 "data_size": 7936 00:19:22.960 } 00:19:22.960 ] 00:19:22.960 }' 00:19:22.960 10:03:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:22.960 10:03:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:22.960 10:03:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:22.960 10:03:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:22.960 10:03:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:19:22.960 10:03:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.960 10:03:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:22.960 10:03:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.960 10:03:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:22.960 10:03:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.960 10:03:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:22.960 [2024-10-21 10:03:59.444133] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:22.960 [2024-10-21 10:03:59.444309] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:22.961 [2024-10-21 10:03:59.444357] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:19:22.961 [2024-10-21 10:03:59.444369] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:22.961 [2024-10-21 10:03:59.444598] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:22.961 [2024-10-21 10:03:59.444613] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:22.961 [2024-10-21 10:03:59.444675] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:19:22.961 [2024-10-21 10:03:59.444693] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:22.961 [2024-10-21 10:03:59.444703] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:22.961 [2024-10-21 10:03:59.444716] bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:19:22.961 BaseBdev1 00:19:22.961 10:03:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.961 10:03:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:19:23.897 10:04:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:23.897 10:04:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:23.897 10:04:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:23.897 10:04:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:23.897 10:04:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:23.897 10:04:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:23.897 10:04:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:23.897 10:04:00 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:23.897 10:04:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:23.897 10:04:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:23.897 10:04:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:23.897 10:04:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:23.898 10:04:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.898 10:04:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:23.898 10:04:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.156 10:04:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:24.156 "name": "raid_bdev1", 00:19:24.156 "uuid": "983d4dda-5ced-4879-83d4-d11ce29cbac9", 00:19:24.156 "strip_size_kb": 0, 00:19:24.156 "state": "online", 00:19:24.156 "raid_level": "raid1", 00:19:24.156 "superblock": true, 00:19:24.156 "num_base_bdevs": 2, 00:19:24.157 "num_base_bdevs_discovered": 1, 00:19:24.157 "num_base_bdevs_operational": 1, 00:19:24.157 "base_bdevs_list": [ 00:19:24.157 { 00:19:24.157 "name": null, 00:19:24.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:24.157 "is_configured": false, 00:19:24.157 "data_offset": 0, 00:19:24.157 "data_size": 7936 00:19:24.157 }, 00:19:24.157 { 00:19:24.157 "name": "BaseBdev2", 00:19:24.157 "uuid": "f85b71fa-cbe5-5247-92e9-d1615ff5bd07", 00:19:24.157 "is_configured": true, 00:19:24.157 "data_offset": 256, 00:19:24.157 "data_size": 7936 00:19:24.157 } 00:19:24.157 ] 00:19:24.157 }' 00:19:24.157 10:04:00 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:24.157 10:04:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:24.416 10:04:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:24.416 10:04:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:24.416 10:04:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:24.416 10:04:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:24.416 10:04:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:24.416 10:04:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:24.416 10:04:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:24.416 10:04:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.416 10:04:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:24.416 10:04:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.416 10:04:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:24.416 "name": "raid_bdev1", 00:19:24.416 "uuid": "983d4dda-5ced-4879-83d4-d11ce29cbac9", 00:19:24.416 "strip_size_kb": 0, 00:19:24.416 "state": "online", 00:19:24.416 "raid_level": "raid1", 00:19:24.416 "superblock": true, 00:19:24.416 "num_base_bdevs": 2, 00:19:24.416 "num_base_bdevs_discovered": 1, 00:19:24.416 "num_base_bdevs_operational": 1, 00:19:24.416 "base_bdevs_list": [ 00:19:24.416 { 00:19:24.416 "name": 
null, 00:19:24.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:24.416 "is_configured": false, 00:19:24.416 "data_offset": 0, 00:19:24.416 "data_size": 7936 00:19:24.416 }, 00:19:24.416 { 00:19:24.416 "name": "BaseBdev2", 00:19:24.416 "uuid": "f85b71fa-cbe5-5247-92e9-d1615ff5bd07", 00:19:24.416 "is_configured": true, 00:19:24.416 "data_offset": 256, 00:19:24.416 "data_size": 7936 00:19:24.416 } 00:19:24.416 ] 00:19:24.416 }' 00:19:24.416 10:04:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:24.416 10:04:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:24.416 10:04:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:24.675 10:04:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:24.675 10:04:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:24.675 10:04:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:19:24.675 10:04:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:24.675 10:04:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:24.675 10:04:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:24.675 10:04:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:24.676 10:04:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:24.676 10:04:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:24.676 10:04:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.676 10:04:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:24.676 [2024-10-21 10:04:01.025904] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:24.676 [2024-10-21 10:04:01.026197] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:24.676 [2024-10-21 10:04:01.026267] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:24.676 request: 00:19:24.676 { 00:19:24.676 "base_bdev": "BaseBdev1", 00:19:24.676 "raid_bdev": "raid_bdev1", 00:19:24.676 "method": "bdev_raid_add_base_bdev", 00:19:24.676 "req_id": 1 00:19:24.676 } 00:19:24.676 Got JSON-RPC error response 00:19:24.676 response: 00:19:24.676 { 00:19:24.676 "code": -22, 00:19:24.676 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:19:24.676 } 00:19:24.676 10:04:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:24.676 10:04:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:19:24.676 10:04:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:24.676 10:04:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:24.676 10:04:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:24.676 10:04:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:19:25.613 10:04:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:19:25.613 10:04:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:25.613 10:04:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:25.613 10:04:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:25.613 10:04:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:25.613 10:04:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:25.613 10:04:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:25.613 10:04:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:25.613 10:04:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:25.613 10:04:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:25.613 10:04:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:25.613 10:04:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:25.613 10:04:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.613 10:04:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:25.613 10:04:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.613 10:04:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:25.613 "name": "raid_bdev1", 00:19:25.613 "uuid": "983d4dda-5ced-4879-83d4-d11ce29cbac9", 00:19:25.613 "strip_size_kb": 0, 
00:19:25.613 "state": "online", 00:19:25.613 "raid_level": "raid1", 00:19:25.613 "superblock": true, 00:19:25.613 "num_base_bdevs": 2, 00:19:25.613 "num_base_bdevs_discovered": 1, 00:19:25.613 "num_base_bdevs_operational": 1, 00:19:25.613 "base_bdevs_list": [ 00:19:25.613 { 00:19:25.613 "name": null, 00:19:25.613 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:25.613 "is_configured": false, 00:19:25.613 "data_offset": 0, 00:19:25.613 "data_size": 7936 00:19:25.613 }, 00:19:25.613 { 00:19:25.613 "name": "BaseBdev2", 00:19:25.613 "uuid": "f85b71fa-cbe5-5247-92e9-d1615ff5bd07", 00:19:25.613 "is_configured": true, 00:19:25.613 "data_offset": 256, 00:19:25.613 "data_size": 7936 00:19:25.613 } 00:19:25.613 ] 00:19:25.613 }' 00:19:25.614 10:04:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:25.614 10:04:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:26.183 10:04:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:26.183 10:04:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:26.183 10:04:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:26.183 10:04:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:26.183 10:04:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:26.183 10:04:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:26.183 10:04:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:26.183 10:04:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.183 
10:04:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:26.183 10:04:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.183 10:04:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:26.183 "name": "raid_bdev1", 00:19:26.183 "uuid": "983d4dda-5ced-4879-83d4-d11ce29cbac9", 00:19:26.183 "strip_size_kb": 0, 00:19:26.183 "state": "online", 00:19:26.183 "raid_level": "raid1", 00:19:26.183 "superblock": true, 00:19:26.183 "num_base_bdevs": 2, 00:19:26.183 "num_base_bdevs_discovered": 1, 00:19:26.183 "num_base_bdevs_operational": 1, 00:19:26.183 "base_bdevs_list": [ 00:19:26.183 { 00:19:26.183 "name": null, 00:19:26.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:26.183 "is_configured": false, 00:19:26.183 "data_offset": 0, 00:19:26.183 "data_size": 7936 00:19:26.183 }, 00:19:26.183 { 00:19:26.183 "name": "BaseBdev2", 00:19:26.183 "uuid": "f85b71fa-cbe5-5247-92e9-d1615ff5bd07", 00:19:26.183 "is_configured": true, 00:19:26.183 "data_offset": 256, 00:19:26.183 "data_size": 7936 00:19:26.183 } 00:19:26.183 ] 00:19:26.183 }' 00:19:26.183 10:04:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:26.183 10:04:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:26.183 10:04:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:26.183 10:04:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:26.183 10:04:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 88757 00:19:26.183 10:04:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 88757 ']' 00:19:26.183 10:04:02 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 88757 00:19:26.183 10:04:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:19:26.183 10:04:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:26.183 10:04:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88757 00:19:26.183 10:04:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:26.183 10:04:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:26.183 10:04:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88757' 00:19:26.183 killing process with pid 88757 00:19:26.183 Received shutdown signal, test time was about 60.000000 seconds 00:19:26.183 00:19:26.183 Latency(us) 00:19:26.183 [2024-10-21T10:04:02.778Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:26.183 [2024-10-21T10:04:02.778Z] =================================================================================================================== 00:19:26.183 [2024-10-21T10:04:02.778Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:26.183 10:04:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@969 -- # kill 88757 00:19:26.183 [2024-10-21 10:04:02.677377] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:26.183 10:04:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@974 -- # wait 88757 00:19:26.183 [2024-10-21 10:04:02.677551] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:26.183 [2024-10-21 10:04:02.677619] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to 
free all in destruct 00:19:26.183 [2024-10-21 10:04:02.677633] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000005f00 name raid_bdev1, state offline 00:19:26.442 [2024-10-21 10:04:03.018014] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:27.824 10:04:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:19:27.824 00:19:27.824 real 0m17.986s 00:19:27.824 user 0m23.468s 00:19:27.824 sys 0m1.864s 00:19:27.824 10:04:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:27.824 10:04:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:27.824 ************************************ 00:19:27.824 END TEST raid_rebuild_test_sb_md_interleaved 00:19:27.824 ************************************ 00:19:27.824 10:04:04 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:19:27.824 10:04:04 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:19:27.824 10:04:04 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 88757 ']' 00:19:27.824 10:04:04 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 88757 00:19:27.824 10:04:04 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:19:27.824 00:19:27.824 real 12m20.869s 00:19:27.824 user 16m29.116s 00:19:27.824 sys 2m2.646s 00:19:27.824 10:04:04 bdev_raid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:27.824 ************************************ 00:19:27.824 10:04:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:27.824 END TEST bdev_raid 00:19:27.824 ************************************ 00:19:27.824 10:04:04 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:19:27.824 10:04:04 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:19:27.824 10:04:04 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:27.824 10:04:04 -- common/autotest_common.sh@10 -- # set +x 00:19:28.084 
************************************ 00:19:28.084 START TEST spdkcli_raid 00:19:28.084 ************************************ 00:19:28.084 10:04:04 spdkcli_raid -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:19:28.084 * Looking for test storage... 00:19:28.084 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:19:28.084 10:04:04 spdkcli_raid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:28.084 10:04:04 spdkcli_raid -- common/autotest_common.sh@1691 -- # lcov --version 00:19:28.084 10:04:04 spdkcli_raid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:28.084 10:04:04 spdkcli_raid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:28.084 10:04:04 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:28.084 10:04:04 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:28.084 10:04:04 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:28.084 10:04:04 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:19:28.084 10:04:04 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:19:28.084 10:04:04 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:19:28.084 10:04:04 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:19:28.084 10:04:04 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:19:28.084 10:04:04 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:19:28.084 10:04:04 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:19:28.084 10:04:04 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:28.084 10:04:04 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:19:28.084 10:04:04 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:19:28.084 10:04:04 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:28.084 10:04:04 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:28.084 10:04:04 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:19:28.084 10:04:04 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:19:28.084 10:04:04 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:28.084 10:04:04 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:19:28.084 10:04:04 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:19:28.084 10:04:04 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:19:28.084 10:04:04 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:19:28.084 10:04:04 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:28.084 10:04:04 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:19:28.084 10:04:04 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:19:28.084 10:04:04 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:28.085 10:04:04 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:28.085 10:04:04 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:19:28.085 10:04:04 spdkcli_raid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:28.085 10:04:04 spdkcli_raid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:28.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:28.085 --rc genhtml_branch_coverage=1 00:19:28.085 --rc genhtml_function_coverage=1 00:19:28.085 --rc genhtml_legend=1 00:19:28.085 --rc geninfo_all_blocks=1 00:19:28.085 --rc geninfo_unexecuted_blocks=1 00:19:28.085 00:19:28.085 ' 00:19:28.085 10:04:04 spdkcli_raid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:28.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:28.085 --rc genhtml_branch_coverage=1 00:19:28.085 --rc genhtml_function_coverage=1 00:19:28.085 --rc genhtml_legend=1 00:19:28.085 --rc geninfo_all_blocks=1 00:19:28.085 --rc geninfo_unexecuted_blocks=1 00:19:28.085 00:19:28.085 ' 00:19:28.085 
10:04:04 spdkcli_raid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:28.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:28.085 --rc genhtml_branch_coverage=1 00:19:28.085 --rc genhtml_function_coverage=1 00:19:28.085 --rc genhtml_legend=1 00:19:28.085 --rc geninfo_all_blocks=1 00:19:28.085 --rc geninfo_unexecuted_blocks=1 00:19:28.085 00:19:28.085 ' 00:19:28.085 10:04:04 spdkcli_raid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:28.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:28.085 --rc genhtml_branch_coverage=1 00:19:28.085 --rc genhtml_function_coverage=1 00:19:28.085 --rc genhtml_legend=1 00:19:28.085 --rc geninfo_all_blocks=1 00:19:28.085 --rc geninfo_unexecuted_blocks=1 00:19:28.085 00:19:28.085 ' 00:19:28.085 10:04:04 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:19:28.085 10:04:04 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:19:28.085 10:04:04 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:19:28.085 10:04:04 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:19:28.085 10:04:04 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:19:28.085 10:04:04 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:19:28.085 10:04:04 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:19:28.085 10:04:04 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:19:28.085 10:04:04 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:19:28.085 10:04:04 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:19:28.085 10:04:04 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:19:28.085 10:04:04 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:19:28.085 10:04:04 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:19:28.085 10:04:04 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:19:28.085 10:04:04 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:19:28.085 10:04:04 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:19:28.085 10:04:04 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:19:28.085 10:04:04 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:19:28.085 10:04:04 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:19:28.085 10:04:04 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:19:28.085 10:04:04 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:19:28.085 10:04:04 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:19:28.085 10:04:04 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:19:28.085 10:04:04 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:19:28.085 10:04:04 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:19:28.085 10:04:04 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:19:28.085 10:04:04 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:19:28.085 10:04:04 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:19:28.085 10:04:04 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:19:28.085 10:04:04 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:19:28.085 10:04:04 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:19:28.085 10:04:04 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:19:28.085 10:04:04 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:19:28.085 10:04:04 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:28.085 10:04:04 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:28.085 10:04:04 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:19:28.410 10:04:04 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=89438 00:19:28.410 10:04:04 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:19:28.410 10:04:04 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 89438 00:19:28.410 10:04:04 spdkcli_raid -- common/autotest_common.sh@831 -- # '[' -z 89438 ']' 00:19:28.410 10:04:04 spdkcli_raid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:28.410 10:04:04 spdkcli_raid -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:28.410 10:04:04 spdkcli_raid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:28.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:28.410 10:04:04 spdkcli_raid -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:28.410 10:04:04 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:28.410 [2024-10-21 10:04:04.779420] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
00:19:28.410 [2024-10-21 10:04:04.779666] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89438 ] 00:19:28.410 [2024-10-21 10:04:04.943851] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:28.670 [2024-10-21 10:04:05.103407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:28.671 [2024-10-21 10:04:05.103448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:30.051 10:04:06 spdkcli_raid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:30.051 10:04:06 spdkcli_raid -- common/autotest_common.sh@864 -- # return 0 00:19:30.051 10:04:06 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:19:30.051 10:04:06 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:30.051 10:04:06 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:30.051 10:04:06 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:19:30.051 10:04:06 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:30.051 10:04:06 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:30.051 10:04:06 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:19:30.051 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:19:30.051 ' 00:19:31.434 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:19:31.434 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:19:31.434 10:04:07 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:19:31.434 10:04:07 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:31.434 10:04:07 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:19:31.434 10:04:08 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:19:31.434 10:04:08 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:31.434 10:04:08 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:31.434 10:04:08 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:19:31.434 ' 00:19:32.824 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:19:32.824 10:04:09 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:19:32.824 10:04:09 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:32.824 10:04:09 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:32.824 10:04:09 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:19:32.824 10:04:09 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:32.824 10:04:09 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:32.824 10:04:09 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:19:32.824 10:04:09 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:19:33.395 10:04:09 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:19:33.395 10:04:09 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:19:33.395 10:04:09 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:19:33.395 10:04:09 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:33.395 10:04:09 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:33.395 10:04:09 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:19:33.395 10:04:09 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:33.395 10:04:09 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:33.395 10:04:09 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:19:33.395 ' 00:19:34.333 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:19:34.593 10:04:11 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:19:34.593 10:04:11 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:34.593 10:04:11 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:34.593 10:04:11 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:19:34.593 10:04:11 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:34.593 10:04:11 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:34.593 10:04:11 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:19:34.593 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:19:34.593 ' 00:19:35.974 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:19:35.974 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:19:36.234 10:04:12 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:19:36.234 10:04:12 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:36.234 10:04:12 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:36.234 10:04:12 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 89438 00:19:36.234 10:04:12 spdkcli_raid -- common/autotest_common.sh@950 -- # '[' -z 89438 ']' 00:19:36.234 10:04:12 spdkcli_raid -- common/autotest_common.sh@954 -- # kill -0 89438 00:19:36.234 10:04:12 spdkcli_raid -- 
common/autotest_common.sh@955 -- # uname 00:19:36.234 10:04:12 spdkcli_raid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:36.234 10:04:12 spdkcli_raid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89438 00:19:36.234 killing process with pid 89438 00:19:36.234 10:04:12 spdkcli_raid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:36.234 10:04:12 spdkcli_raid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:36.234 10:04:12 spdkcli_raid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 89438' 00:19:36.234 10:04:12 spdkcli_raid -- common/autotest_common.sh@969 -- # kill 89438 00:19:36.234 10:04:12 spdkcli_raid -- common/autotest_common.sh@974 -- # wait 89438 00:19:39.535 10:04:15 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:19:39.535 10:04:15 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 89438 ']' 00:19:39.535 10:04:15 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 89438 00:19:39.535 10:04:15 spdkcli_raid -- common/autotest_common.sh@950 -- # '[' -z 89438 ']' 00:19:39.535 Process with pid 89438 is not found 00:19:39.535 10:04:15 spdkcli_raid -- common/autotest_common.sh@954 -- # kill -0 89438 00:19:39.535 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (89438) - No such process 00:19:39.535 10:04:15 spdkcli_raid -- common/autotest_common.sh@977 -- # echo 'Process with pid 89438 is not found' 00:19:39.535 10:04:15 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:19:39.535 10:04:15 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:19:39.535 10:04:15 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:19:39.535 10:04:15 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:19:39.535 ************************************ 00:19:39.535 END TEST spdkcli_raid 
00:19:39.535 ************************************ 00:19:39.535 00:19:39.535 real 0m10.993s 00:19:39.535 user 0m22.544s 00:19:39.535 sys 0m1.338s 00:19:39.535 10:04:15 spdkcli_raid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:39.535 10:04:15 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:39.535 10:04:15 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:19:39.535 10:04:15 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:39.535 10:04:15 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:39.535 10:04:15 -- common/autotest_common.sh@10 -- # set +x 00:19:39.536 ************************************ 00:19:39.536 START TEST blockdev_raid5f 00:19:39.536 ************************************ 00:19:39.536 10:04:15 blockdev_raid5f -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:19:39.536 * Looking for test storage... 00:19:39.536 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:19:39.536 10:04:15 blockdev_raid5f -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:39.536 10:04:15 blockdev_raid5f -- common/autotest_common.sh@1691 -- # lcov --version 00:19:39.536 10:04:15 blockdev_raid5f -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:39.536 10:04:15 blockdev_raid5f -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:39.536 10:04:15 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:39.536 10:04:15 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:39.536 10:04:15 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:39.536 10:04:15 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:19:39.536 10:04:15 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:19:39.536 10:04:15 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:19:39.536 10:04:15 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:19:39.536 10:04:15 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:19:39.536 10:04:15 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:19:39.536 10:04:15 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:19:39.536 10:04:15 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:39.536 10:04:15 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:19:39.536 10:04:15 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:19:39.536 10:04:15 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:39.536 10:04:15 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:39.536 10:04:15 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:19:39.536 10:04:15 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:19:39.536 10:04:15 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:39.536 10:04:15 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:19:39.536 10:04:15 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:19:39.536 10:04:15 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:19:39.536 10:04:15 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:19:39.536 10:04:15 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:39.536 10:04:15 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:19:39.536 10:04:15 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:19:39.536 10:04:15 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:39.536 10:04:15 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:39.536 10:04:15 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:19:39.536 10:04:15 blockdev_raid5f -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:39.536 10:04:15 blockdev_raid5f -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:39.536 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:39.536 --rc genhtml_branch_coverage=1 00:19:39.536 --rc genhtml_function_coverage=1 00:19:39.536 --rc genhtml_legend=1 00:19:39.536 --rc geninfo_all_blocks=1 00:19:39.536 --rc geninfo_unexecuted_blocks=1 00:19:39.536 00:19:39.536 ' 00:19:39.536 10:04:15 blockdev_raid5f -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:39.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:39.536 --rc genhtml_branch_coverage=1 00:19:39.536 --rc genhtml_function_coverage=1 00:19:39.536 --rc genhtml_legend=1 00:19:39.536 --rc geninfo_all_blocks=1 00:19:39.536 --rc geninfo_unexecuted_blocks=1 00:19:39.536 00:19:39.536 ' 00:19:39.536 10:04:15 blockdev_raid5f -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:39.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:39.536 --rc genhtml_branch_coverage=1 00:19:39.536 --rc genhtml_function_coverage=1 00:19:39.536 --rc genhtml_legend=1 00:19:39.536 --rc geninfo_all_blocks=1 00:19:39.536 --rc geninfo_unexecuted_blocks=1 00:19:39.536 00:19:39.536 ' 00:19:39.536 10:04:15 blockdev_raid5f -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:39.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:39.536 --rc genhtml_branch_coverage=1 00:19:39.536 --rc genhtml_function_coverage=1 00:19:39.536 --rc genhtml_legend=1 00:19:39.536 --rc geninfo_all_blocks=1 00:19:39.536 --rc geninfo_unexecuted_blocks=1 00:19:39.536 00:19:39.536 ' 00:19:39.536 10:04:15 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:19:39.536 10:04:15 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:19:39.536 10:04:15 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:19:39.536 10:04:15 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:39.536 10:04:15 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:19:39.536 10:04:15 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:19:39.536 10:04:15 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:19:39.536 10:04:15 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:19:39.536 10:04:15 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:19:39.536 10:04:15 blockdev_raid5f -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:19:39.536 10:04:15 blockdev_raid5f -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:19:39.536 10:04:15 blockdev_raid5f -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:19:39.536 10:04:15 blockdev_raid5f -- bdev/blockdev.sh@673 -- # uname -s 00:19:39.536 10:04:15 blockdev_raid5f -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:19:39.536 10:04:15 blockdev_raid5f -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:19:39.536 10:04:15 blockdev_raid5f -- bdev/blockdev.sh@681 -- # test_type=raid5f 00:19:39.536 10:04:15 blockdev_raid5f -- bdev/blockdev.sh@682 -- # crypto_device= 00:19:39.536 10:04:15 blockdev_raid5f -- bdev/blockdev.sh@683 -- # dek= 00:19:39.536 10:04:15 blockdev_raid5f -- bdev/blockdev.sh@684 -- # env_ctx= 00:19:39.536 10:04:15 blockdev_raid5f -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:19:39.536 10:04:15 blockdev_raid5f -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:19:39.536 10:04:15 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == bdev ]] 00:19:39.536 10:04:15 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == crypto_* ]] 00:19:39.536 10:04:15 blockdev_raid5f -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:19:39.536 10:04:15 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=89725 00:19:39.536 10:04:15 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:19:39.536 10:04:15 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 
89725 00:19:39.536 10:04:15 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:19:39.536 10:04:15 blockdev_raid5f -- common/autotest_common.sh@831 -- # '[' -z 89725 ']' 00:19:39.536 10:04:15 blockdev_raid5f -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:39.536 10:04:15 blockdev_raid5f -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:39.536 10:04:15 blockdev_raid5f -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:39.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:39.536 10:04:15 blockdev_raid5f -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:39.536 10:04:15 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:39.536 [2024-10-21 10:04:15.830114] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:19:39.536 [2024-10-21 10:04:15.830361] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89725 ] 00:19:39.536 [2024-10-21 10:04:16.000973] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:39.797 [2024-10-21 10:04:16.153862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:40.736 10:04:17 blockdev_raid5f -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:40.736 10:04:17 blockdev_raid5f -- common/autotest_common.sh@864 -- # return 0 00:19:40.736 10:04:17 blockdev_raid5f -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:19:40.736 10:04:17 blockdev_raid5f -- bdev/blockdev.sh@725 -- # setup_raid5f_conf 00:19:40.736 10:04:17 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:19:40.737 10:04:17 blockdev_raid5f -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.737 10:04:17 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:40.737 Malloc0 00:19:40.996 Malloc1 00:19:40.996 Malloc2 00:19:40.996 10:04:17 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.996 10:04:17 blockdev_raid5f -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:19:40.996 10:04:17 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.996 10:04:17 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:40.996 10:04:17 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.996 10:04:17 blockdev_raid5f -- bdev/blockdev.sh@739 -- # cat 00:19:40.996 10:04:17 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:19:40.996 10:04:17 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.996 10:04:17 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:40.996 10:04:17 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.996 10:04:17 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:19:40.996 10:04:17 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.996 10:04:17 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:40.996 10:04:17 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.996 10:04:17 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:19:40.996 10:04:17 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.996 10:04:17 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:40.996 10:04:17 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.996 10:04:17 blockdev_raid5f -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:19:40.996 10:04:17 blockdev_raid5f -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 
00:19:40.996 10:04:17 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.996 10:04:17 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:40.996 10:04:17 blockdev_raid5f -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:19:40.996 10:04:17 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.996 10:04:17 blockdev_raid5f -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:19:40.996 10:04:17 blockdev_raid5f -- bdev/blockdev.sh@748 -- # jq -r .name 00:19:40.996 10:04:17 blockdev_raid5f -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "bc31bc8b-9fd9-409f-9296-4bb1f605b185"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "bc31bc8b-9fd9-409f-9296-4bb1f605b185",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "bc31bc8b-9fd9-409f-9296-4bb1f605b185",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "be4a0bfc-7a8a-4229-b248-354874a6c3c9",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": 
"a6a67113-4e45-40a4-824a-403c73e52bd6",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "3aa92d39-2063-49c7-ab71-00c73d164904",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:19:41.256 10:04:17 blockdev_raid5f -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:19:41.256 10:04:17 blockdev_raid5f -- bdev/blockdev.sh@751 -- # hello_world_bdev=raid5f 00:19:41.256 10:04:17 blockdev_raid5f -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:19:41.256 10:04:17 blockdev_raid5f -- bdev/blockdev.sh@753 -- # killprocess 89725 00:19:41.256 10:04:17 blockdev_raid5f -- common/autotest_common.sh@950 -- # '[' -z 89725 ']' 00:19:41.256 10:04:17 blockdev_raid5f -- common/autotest_common.sh@954 -- # kill -0 89725 00:19:41.256 10:04:17 blockdev_raid5f -- common/autotest_common.sh@955 -- # uname 00:19:41.256 10:04:17 blockdev_raid5f -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:41.256 10:04:17 blockdev_raid5f -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89725 00:19:41.256 killing process with pid 89725 00:19:41.256 10:04:17 blockdev_raid5f -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:41.256 10:04:17 blockdev_raid5f -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:41.256 10:04:17 blockdev_raid5f -- common/autotest_common.sh@968 -- # echo 'killing process with pid 89725' 00:19:41.256 10:04:17 blockdev_raid5f -- common/autotest_common.sh@969 -- # kill 89725 00:19:41.256 10:04:17 blockdev_raid5f -- common/autotest_common.sh@974 -- # wait 89725 00:19:44.544 10:04:20 blockdev_raid5f -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:44.544 10:04:20 blockdev_raid5f -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:19:44.544 10:04:20 
blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:19:44.544 10:04:20 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:44.544 10:04:20 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:44.544 ************************************ 00:19:44.545 START TEST bdev_hello_world 00:19:44.545 ************************************ 00:19:44.545 10:04:20 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:19:44.545 [2024-10-21 10:04:20.804638] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:19:44.545 [2024-10-21 10:04:20.804908] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89798 ] 00:19:44.545 [2024-10-21 10:04:20.977721] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:44.545 [2024-10-21 10:04:21.128683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:45.482 [2024-10-21 10:04:21.787138] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:19:45.482 [2024-10-21 10:04:21.787313] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:19:45.482 [2024-10-21 10:04:21.787341] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:19:45.482 [2024-10-21 10:04:21.787882] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:19:45.482 [2024-10-21 10:04:21.788039] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:19:45.482 [2024-10-21 10:04:21.788056] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:19:45.482 [2024-10-21 10:04:21.788109] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev 
: Hello World! 00:19:45.482 00:19:45.482 [2024-10-21 10:04:21.788127] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:19:46.860 ************************************ 00:19:46.860 END TEST bdev_hello_world 00:19:46.860 ************************************ 00:19:46.860 00:19:46.860 real 0m2.704s 00:19:46.860 user 0m2.210s 00:19:46.860 sys 0m0.366s 00:19:46.860 10:04:23 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:46.860 10:04:23 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:19:47.120 10:04:23 blockdev_raid5f -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:19:47.120 10:04:23 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:47.120 10:04:23 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:47.120 10:04:23 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:47.120 ************************************ 00:19:47.120 START TEST bdev_bounds 00:19:47.120 ************************************ 00:19:47.120 10:04:23 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:19:47.120 10:04:23 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=89846 00:19:47.120 10:04:23 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:19:47.120 10:04:23 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:19:47.120 10:04:23 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 89846' 00:19:47.120 Process bdevio pid: 89846 00:19:47.120 10:04:23 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 89846 00:19:47.120 10:04:23 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 89846 ']' 00:19:47.120 10:04:23 
blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:47.120 10:04:23 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:47.120 10:04:23 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:47.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:47.120 10:04:23 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:47.120 10:04:23 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:19:47.120 [2024-10-21 10:04:23.585214] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:19:47.120 [2024-10-21 10:04:23.585470] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89846 ] 00:19:47.380 [2024-10-21 10:04:23.741057] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:47.380 [2024-10-21 10:04:23.892081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:47.380 [2024-10-21 10:04:23.892260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:47.380 [2024-10-21 10:04:23.892313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:48.318 10:04:24 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:48.318 10:04:24 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:19:48.318 10:04:24 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:19:48.318 I/O targets: 00:19:48.318 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:19:48.318 00:19:48.318 
00:19:48.318 CUnit - A unit testing framework for C - Version 2.1-3 00:19:48.318 http://cunit.sourceforge.net/ 00:19:48.318 00:19:48.318 00:19:48.318 Suite: bdevio tests on: raid5f 00:19:48.318 Test: blockdev write read block ...passed 00:19:48.318 Test: blockdev write zeroes read block ...passed 00:19:48.318 Test: blockdev write zeroes read no split ...passed 00:19:48.318 Test: blockdev write zeroes read split ...passed 00:19:48.577 Test: blockdev write zeroes read split partial ...passed 00:19:48.577 Test: blockdev reset ...passed 00:19:48.577 Test: blockdev write read 8 blocks ...passed 00:19:48.577 Test: blockdev write read size > 128k ...passed 00:19:48.577 Test: blockdev write read invalid size ...passed 00:19:48.577 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:48.577 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:48.577 Test: blockdev write read max offset ...passed 00:19:48.577 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:48.577 Test: blockdev writev readv 8 blocks ...passed 00:19:48.577 Test: blockdev writev readv 30 x 1block ...passed 00:19:48.577 Test: blockdev writev readv block ...passed 00:19:48.577 Test: blockdev writev readv size > 128k ...passed 00:19:48.577 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:48.577 Test: blockdev comparev and writev ...passed 00:19:48.577 Test: blockdev nvme passthru rw ...passed 00:19:48.577 Test: blockdev nvme passthru vendor specific ...passed 00:19:48.577 Test: blockdev nvme admin passthru ...passed 00:19:48.577 Test: blockdev copy ...passed 00:19:48.577 00:19:48.577 Run Summary: Type Total Ran Passed Failed Inactive 00:19:48.577 suites 1 1 n/a 0 0 00:19:48.577 tests 23 23 23 0 0 00:19:48.577 asserts 130 130 130 0 n/a 00:19:48.577 00:19:48.577 Elapsed time = 0.742 seconds 00:19:48.577 0 00:19:48.577 10:04:25 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 89846 00:19:48.577 
10:04:25 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 89846 ']' 00:19:48.577 10:04:25 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 89846 00:19:48.577 10:04:25 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:19:48.577 10:04:25 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:48.577 10:04:25 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89846 00:19:48.577 killing process with pid 89846 00:19:48.577 10:04:25 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:48.577 10:04:25 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:48.577 10:04:25 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 89846' 00:19:48.577 10:04:25 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@969 -- # kill 89846 00:19:48.577 10:04:25 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@974 -- # wait 89846 00:19:50.482 10:04:26 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:19:50.482 00:19:50.482 real 0m3.309s 00:19:50.482 user 0m8.242s 00:19:50.482 sys 0m0.498s 00:19:50.482 10:04:26 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:50.482 10:04:26 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:19:50.482 ************************************ 00:19:50.482 END TEST bdev_bounds 00:19:50.482 ************************************ 00:19:50.482 10:04:26 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:19:50.482 10:04:26 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:19:50.482 10:04:26 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:50.482 
10:04:26 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:50.482 ************************************ 00:19:50.482 START TEST bdev_nbd 00:19:50.482 ************************************ 00:19:50.482 10:04:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:19:50.482 10:04:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:19:50.482 10:04:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:19:50.482 10:04:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:50.482 10:04:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:50.482 10:04:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:19:50.482 10:04:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:19:50.482 10:04:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:19:50.483 10:04:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:19:50.483 10:04:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:19:50.483 10:04:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:19:50.483 10:04:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:19:50.483 10:04:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:19:50.483 10:04:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:19:50.483 10:04:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:19:50.483 10:04:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 
-- # local bdev_list 00:19:50.483 10:04:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=89913 00:19:50.483 10:04:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:19:50.483 10:04:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:19:50.483 10:04:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 89913 /var/tmp/spdk-nbd.sock 00:19:50.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:19:50.483 10:04:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 89913 ']' 00:19:50.483 10:04:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:19:50.483 10:04:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:50.483 10:04:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:19:50.483 10:04:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:50.483 10:04:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:19:50.483 [2024-10-21 10:04:26.975463] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
00:19:50.483 [2024-10-21 10:04:26.975717] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:50.742 [2024-10-21 10:04:27.148631] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:50.742 [2024-10-21 10:04:27.305117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:51.680 10:04:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:51.680 10:04:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:19:51.680 10:04:28 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:19:51.680 10:04:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:51.680 10:04:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:19:51.680 10:04:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:19:51.680 10:04:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:19:51.680 10:04:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:51.680 10:04:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:19:51.680 10:04:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:19:51.680 10:04:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:19:51.680 10:04:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:19:51.680 10:04:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:19:51.680 10:04:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:19:51.680 10:04:28 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:19:51.940 10:04:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:19:51.940 10:04:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:19:51.940 10:04:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:19:51.940 10:04:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:19:51.940 10:04:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:19:51.940 10:04:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:19:51.940 10:04:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:19:51.940 10:04:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:19:51.940 10:04:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:19:51.940 10:04:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:19:51.940 10:04:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:19:51.940 10:04:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:51.940 1+0 records in 00:19:51.940 1+0 records out 00:19:51.940 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000452069 s, 9.1 MB/s 00:19:51.940 10:04:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:51.940 10:04:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:19:51.940 10:04:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:51.940 10:04:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 
00:19:51.940 10:04:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:19:51.940 10:04:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:51.940 10:04:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:19:51.940 10:04:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:52.198 10:04:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:19:52.198 { 00:19:52.198 "nbd_device": "/dev/nbd0", 00:19:52.198 "bdev_name": "raid5f" 00:19:52.198 } 00:19:52.198 ]' 00:19:52.198 10:04:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:19:52.198 10:04:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:19:52.198 { 00:19:52.198 "nbd_device": "/dev/nbd0", 00:19:52.198 "bdev_name": "raid5f" 00:19:52.198 } 00:19:52.198 ]' 00:19:52.198 10:04:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:19:52.198 10:04:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:52.198 10:04:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:52.198 10:04:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:52.198 10:04:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:52.198 10:04:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:52.198 10:04:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:52.198 10:04:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:52.456 10:04:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:19:52.457 10:04:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:52.457 10:04:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:52.457 10:04:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:52.457 10:04:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:52.457 10:04:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:52.457 10:04:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:52.457 10:04:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:52.457 10:04:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:52.457 10:04:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:52.457 10:04:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:52.457 10:04:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:52.457 10:04:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:52.457 10:04:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:52.715 10:04:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:52.715 10:04:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:19:52.715 10:04:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:52.715 10:04:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:19:52.715 10:04:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:19:52.715 10:04:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:19:52.715 10:04:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:19:52.715 10:04:29 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:19:52.715 10:04:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:19:52.715 10:04:29 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:19:52.715 10:04:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:52.715 10:04:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:19:52.715 10:04:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:19:52.715 10:04:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:19:52.715 10:04:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:19:52.715 10:04:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:19:52.716 10:04:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:52.716 10:04:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:19:52.716 10:04:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:52.716 10:04:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:52.716 10:04:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:52.716 10:04:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:19:52.716 10:04:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:52.716 10:04:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:52.716 10:04:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:19:52.974 /dev/nbd0 00:19:52.974 10:04:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:52.974 10:04:29 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:52.974 10:04:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:19:52.975 10:04:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:19:52.975 10:04:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:19:52.975 10:04:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:19:52.975 10:04:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:19:52.975 10:04:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:19:52.975 10:04:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:19:52.975 10:04:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:19:52.975 10:04:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:52.975 1+0 records in 00:19:52.975 1+0 records out 00:19:52.975 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000367053 s, 11.2 MB/s 00:19:52.975 10:04:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:52.975 10:04:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:19:52.975 10:04:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:52.975 10:04:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:19:52.975 10:04:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:19:52.975 10:04:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:52.975 10:04:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:52.975 10:04:29 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:52.975 10:04:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:52.975 10:04:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:53.234 10:04:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:19:53.234 { 00:19:53.234 "nbd_device": "/dev/nbd0", 00:19:53.234 "bdev_name": "raid5f" 00:19:53.234 } 00:19:53.234 ]' 00:19:53.234 10:04:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:19:53.235 { 00:19:53.235 "nbd_device": "/dev/nbd0", 00:19:53.235 "bdev_name": "raid5f" 00:19:53.235 } 00:19:53.235 ]' 00:19:53.235 10:04:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:53.235 10:04:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:19:53.235 10:04:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:19:53.235 10:04:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:53.235 10:04:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:19:53.235 10:04:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:19:53.235 10:04:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:19:53.235 10:04:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:19:53.235 10:04:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:19:53.235 10:04:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:19:53.235 10:04:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:53.235 10:04:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:19:53.235 10:04:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:53.235 10:04:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:19:53.235 10:04:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:19:53.235 256+0 records in 00:19:53.235 256+0 records out 00:19:53.235 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0131155 s, 79.9 MB/s 00:19:53.235 10:04:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:53.235 10:04:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:19:53.235 256+0 records in 00:19:53.235 256+0 records out 00:19:53.235 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0328503 s, 31.9 MB/s 00:19:53.235 10:04:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:19:53.235 10:04:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:19:53.235 10:04:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:53.235 10:04:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:19:53.235 10:04:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:53.235 10:04:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:19:53.235 10:04:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:19:53.235 10:04:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:53.235 10:04:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:19:53.235 10:04:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:53.235 10:04:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:53.235 10:04:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:53.235 10:04:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:53.235 10:04:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:53.235 10:04:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:53.235 10:04:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:53.235 10:04:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:53.494 10:04:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:53.494 10:04:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:53.494 10:04:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:53.494 10:04:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:53.494 10:04:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:53.494 10:04:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:53.494 10:04:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:53.494 10:04:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:53.494 10:04:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:53.494 10:04:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:53.494 10:04:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:19:53.752 10:04:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:53.752 10:04:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:53.752 10:04:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:53.752 10:04:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:53.752 10:04:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:19:53.752 10:04:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:53.752 10:04:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:19:53.752 10:04:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:19:53.752 10:04:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:19:53.752 10:04:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:19:53.752 10:04:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:19:53.752 10:04:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:19:53.752 10:04:30 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:53.752 10:04:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:53.752 10:04:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:19:53.752 10:04:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:19:54.011 malloc_lvol_verify 00:19:54.011 10:04:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:19:54.270 10e16c60-e1e0-4c08-8e4f-9caa5f7e6ab7 00:19:54.270 10:04:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:19:54.528 31b0c8ec-bdbc-435f-8f28-991875db891b 00:19:54.528 10:04:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:19:54.789 /dev/nbd0 00:19:54.789 10:04:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:19:54.789 10:04:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:19:54.789 10:04:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:19:54.789 10:04:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:19:54.789 10:04:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:19:54.789 mke2fs 1.47.0 (5-Feb-2023) 00:19:54.789 Discarding device blocks: 0/4096 done 00:19:54.789 Creating filesystem with 4096 1k blocks and 1024 inodes 00:19:54.789 00:19:54.789 Allocating group tables: 0/1 done 00:19:54.789 Writing inode tables: 0/1 done 00:19:54.789 Creating journal (1024 blocks): done 00:19:54.789 Writing superblocks and filesystem accounting information: 0/1 done 00:19:54.789 00:19:54.789 10:04:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:54.789 10:04:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:54.789 10:04:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:54.789 10:04:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:54.789 10:04:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:54.789 10:04:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:54.789 10:04:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:55.063 10:04:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:55.063 10:04:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:55.063 10:04:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:55.063 10:04:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:55.063 10:04:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:55.063 10:04:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:55.063 10:04:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:55.063 10:04:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:55.063 10:04:31 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 89913 00:19:55.063 10:04:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 89913 ']' 00:19:55.063 10:04:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 89913 00:19:55.063 10:04:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:19:55.063 10:04:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:55.063 10:04:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89913 00:19:55.063 killing process with pid 89913 00:19:55.063 10:04:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:55.063 10:04:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:55.063 10:04:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 89913' 00:19:55.063 10:04:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@969 -- # kill 89913 00:19:55.063 10:04:31 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@974 -- # wait 89913 00:19:56.966 ************************************ 00:19:56.966 END TEST bdev_nbd 00:19:56.966 ************************************ 00:19:56.966 10:04:33 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:19:56.966 00:19:56.966 real 0m6.427s 00:19:56.966 user 0m8.593s 00:19:56.966 sys 0m1.512s 00:19:56.966 10:04:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:56.966 10:04:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:19:56.966 10:04:33 blockdev_raid5f -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:19:56.966 10:04:33 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = nvme ']' 00:19:56.966 10:04:33 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = gpt ']' 00:19:56.966 10:04:33 blockdev_raid5f -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:19:56.966 10:04:33 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:56.966 10:04:33 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:56.966 10:04:33 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:56.966 ************************************ 00:19:56.966 START TEST bdev_fio 00:19:56.966 ************************************ 00:19:56.966 10:04:33 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:19:56.966 10:04:33 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:19:56.966 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:19:56.966 10:04:33 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:19:56.966 10:04:33 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:19:56.966 10:04:33 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:19:56.966 10:04:33 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:19:56.966 10:04:33 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:19:56.966 10:04:33 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:19:56.966 10:04:33 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:56.966 10:04:33 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:19:56.966 10:04:33 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:19:56.966 10:04:33 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:19:56.966 10:04:33 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:19:56.966 10:04:33 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:19:56.966 10:04:33 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:19:56.966 10:04:33 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:19:56.966 10:04:33 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:56.966 10:04:33 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:19:56.966 10:04:33 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:19:56.966 10:04:33 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:19:56.966 10:04:33 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:19:56.966 10:04:33 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:19:56.966 10:04:33 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
00:19:56.966 10:04:33 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:19:56.966 10:04:33 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:19:56.966 10:04:33 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:19:56.966 10:04:33 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:19:56.966 10:04:33 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:19:56.966 10:04:33 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:56.966 10:04:33 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:19:56.966 10:04:33 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:56.966 10:04:33 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:19:56.966 ************************************ 00:19:56.966 START TEST bdev_fio_rw_verify 00:19:56.966 ************************************ 00:19:56.966 10:04:33 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:56.966 10:04:33 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:56.966 10:04:33 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:19:56.966 10:04:33 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:56.966 10:04:33 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:19:56.966 10:04:33 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:56.966 10:04:33 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:19:56.966 10:04:33 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:19:56.966 10:04:33 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:56.966 10:04:33 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:56.966 10:04:33 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:56.966 10:04:33 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:19:57.225 10:04:33 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:57.225 10:04:33 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:57.225 10:04:33 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1347 -- # break 00:19:57.225 10:04:33 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:57.225 10:04:33 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:57.225 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:19:57.225 fio-3.35 00:19:57.225 Starting 1 thread 00:20:09.429 00:20:09.429 job_raid5f: (groupid=0, jobs=1): err= 0: pid=90120: Mon Oct 21 10:04:44 2024 00:20:09.429 read: IOPS=9427, BW=36.8MiB/s (38.6MB/s)(368MiB/10001msec) 00:20:09.429 slat (nsec): min=19944, max=79437, avg=25575.58, stdev=2441.74 00:20:09.429 clat (usec): min=12, max=462, avg=167.77, stdev=60.91 00:20:09.429 lat (usec): min=37, max=487, avg=193.35, stdev=61.26 00:20:09.429 clat percentiles (usec): 00:20:09.429 | 50.000th=[ 172], 99.000th=[ 285], 99.900th=[ 318], 99.990th=[ 396], 00:20:09.429 | 99.999th=[ 461] 00:20:09.429 write: IOPS=9882, BW=38.6MiB/s (40.5MB/s)(382MiB/9883msec); 0 zone resets 00:20:09.429 slat (usec): min=9, max=352, avg=21.42, stdev= 4.80 00:20:09.429 clat (usec): min=73, max=1077, avg=391.09, stdev=51.36 00:20:09.429 lat (usec): min=94, max=1266, avg=412.51, stdev=52.39 00:20:09.429 clat percentiles (usec): 00:20:09.429 | 50.000th=[ 396], 99.000th=[ 498], 99.900th=[ 627], 99.990th=[ 971], 00:20:09.429 | 99.999th=[ 1074] 00:20:09.429 bw ( KiB/s): min=36120, max=41624, per=98.90%, avg=39096.26, stdev=1741.29, samples=19 00:20:09.429 iops : min= 9030, max=10406, avg=9774.05, stdev=435.30, samples=19 00:20:09.429 lat (usec) : 20=0.01%, 50=0.01%, 
100=9.89%, 250=34.03%, 500=55.65% 00:20:09.429 lat (usec) : 750=0.40%, 1000=0.02% 00:20:09.429 lat (msec) : 2=0.01% 00:20:09.429 cpu : usr=98.82%, sys=0.50%, ctx=20, majf=0, minf=8044 00:20:09.429 IO depths : 1=7.6%, 2=19.9%, 4=55.2%, 8=17.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:09.429 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:09.429 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:09.429 issued rwts: total=94282,97669,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:09.429 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:09.429 00:20:09.429 Run status group 0 (all jobs): 00:20:09.429 READ: bw=36.8MiB/s (38.6MB/s), 36.8MiB/s-36.8MiB/s (38.6MB/s-38.6MB/s), io=368MiB (386MB), run=10001-10001msec 00:20:09.429 WRITE: bw=38.6MiB/s (40.5MB/s), 38.6MiB/s-38.6MiB/s (40.5MB/s-40.5MB/s), io=382MiB (400MB), run=9883-9883msec 00:20:10.364 ----------------------------------------------------- 00:20:10.364 Suppressions used: 00:20:10.364 count bytes template 00:20:10.364 1 7 /usr/src/fio/parse.c 00:20:10.364 459 44064 /usr/src/fio/iolog.c 00:20:10.364 1 8 libtcmalloc_minimal.so 00:20:10.364 1 904 libcrypto.so 00:20:10.364 ----------------------------------------------------- 00:20:10.364 00:20:10.364 00:20:10.364 real 0m13.228s 00:20:10.364 user 0m13.388s 00:20:10.364 sys 0m0.765s 00:20:10.364 10:04:46 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:10.364 10:04:46 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:20:10.364 ************************************ 00:20:10.364 END TEST bdev_fio_rw_verify 00:20:10.364 ************************************ 00:20:10.364 10:04:46 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:20:10.364 10:04:46 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:10.364 10:04:46 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:20:10.364 10:04:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:10.364 10:04:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:20:10.364 10:04:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:20:10.364 10:04:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:20:10.364 10:04:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:20:10.364 10:04:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:20:10.364 10:04:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:20:10.364 10:04:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:20:10.364 10:04:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:10.364 10:04:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:20:10.364 10:04:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:20:10.364 10:04:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:20:10.364 10:04:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:20:10.364 10:04:46 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "bc31bc8b-9fd9-409f-9296-4bb1f605b185"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "bc31bc8b-9fd9-409f-9296-4bb1f605b185",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "bc31bc8b-9fd9-409f-9296-4bb1f605b185",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "be4a0bfc-7a8a-4229-b248-354874a6c3c9",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "a6a67113-4e45-40a4-824a-403c73e52bd6",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "3aa92d39-2063-49c7-ab71-00c73d164904",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:20:10.364 10:04:46 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:20:10.364 10:04:46 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:20:10.364 10:04:46 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:10.364 /home/vagrant/spdk_repo/spdk 00:20:10.364 10:04:46 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:20:10.364 10:04:46 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:20:10.364 ************************************ 00:20:10.364 END TEST bdev_fio 00:20:10.364 
************************************ 00:20:10.364 10:04:46 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:20:10.364 00:20:10.364 real 0m13.522s 00:20:10.364 user 0m13.513s 00:20:10.364 sys 0m0.904s 00:20:10.364 10:04:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:10.364 10:04:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:20:10.364 10:04:46 blockdev_raid5f -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:20:10.364 10:04:46 blockdev_raid5f -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:20:10.364 10:04:46 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:20:10.364 10:04:46 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:10.364 10:04:46 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:10.364 ************************************ 00:20:10.364 START TEST bdev_verify 00:20:10.364 ************************************ 00:20:10.364 10:04:46 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:20:10.623 [2024-10-21 10:04:47.042720] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 
00:20:10.623 [2024-10-21 10:04:47.042856] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90286 ] 00:20:10.623 [2024-10-21 10:04:47.212431] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:10.882 [2024-10-21 10:04:47.367608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:10.882 [2024-10-21 10:04:47.367672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:11.449 Running I/O for 5 seconds... 00:20:13.767 12644.00 IOPS, 49.39 MiB/s [2024-10-21T10:04:51.299Z] 13610.50 IOPS, 53.17 MiB/s [2024-10-21T10:04:52.239Z] 13367.67 IOPS, 52.22 MiB/s [2024-10-21T10:04:53.177Z] 13472.25 IOPS, 52.63 MiB/s [2024-10-21T10:04:53.177Z] 13514.00 IOPS, 52.79 MiB/s 00:20:16.582 Latency(us) 00:20:16.582 [2024-10-21T10:04:53.177Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:16.582 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:16.582 Verification LBA range: start 0x0 length 0x2000 00:20:16.582 raid5f : 5.02 6729.65 26.29 0.00 0.00 28563.00 377.40 24840.72 00:20:16.582 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:16.582 Verification LBA range: start 0x2000 length 0x2000 00:20:16.582 raid5f : 5.02 6771.93 26.45 0.00 0.00 28458.60 108.66 24955.19 00:20:16.582 [2024-10-21T10:04:53.177Z] =================================================================================================================== 00:20:16.582 [2024-10-21T10:04:53.177Z] Total : 13501.58 52.74 0.00 0.00 28510.63 108.66 24955.19 00:20:18.488 00:20:18.488 real 0m7.774s 00:20:18.488 user 0m14.198s 00:20:18.488 sys 0m0.368s 00:20:18.488 ************************************ 00:20:18.488 END TEST bdev_verify 00:20:18.488 ************************************ 
00:20:18.488 10:04:54 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:18.488 10:04:54 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:20:18.488 10:04:54 blockdev_raid5f -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:20:18.488 10:04:54 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:20:18.488 10:04:54 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:18.488 10:04:54 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:18.488 ************************************ 00:20:18.488 START TEST bdev_verify_big_io 00:20:18.488 ************************************ 00:20:18.488 10:04:54 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:20:18.488 [2024-10-21 10:04:54.885320] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:20:18.489 [2024-10-21 10:04:54.885577] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90391 ] 00:20:18.489 [2024-10-21 10:04:55.054734] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:18.748 [2024-10-21 10:04:55.212370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:18.748 [2024-10-21 10:04:55.212411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:19.684 Running I/O for 5 seconds... 
00:20:21.561 568.00 IOPS, 35.50 MiB/s [2024-10-21T10:04:59.091Z] 727.50 IOPS, 45.47 MiB/s [2024-10-21T10:05:00.467Z] 761.33 IOPS, 47.58 MiB/s [2024-10-21T10:05:01.404Z] 777.00 IOPS, 48.56 MiB/s [2024-10-21T10:05:01.404Z] 812.60 IOPS, 50.79 MiB/s 00:20:24.809 Latency(us) 00:20:24.809 [2024-10-21T10:05:01.404Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:24.809 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:20:24.809 Verification LBA range: start 0x0 length 0x200 00:20:24.809 raid5f : 5.11 397.92 24.87 0.00 0.00 7959345.31 250.41 353493.74 00:20:24.809 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:20:24.809 Verification LBA range: start 0x200 length 0x200 00:20:24.809 raid5f : 5.29 407.81 25.49 0.00 0.00 7762048.92 171.71 351662.17 00:20:24.809 [2024-10-21T10:05:01.404Z] =================================================================================================================== 00:20:24.809 [2024-10-21T10:05:01.404Z] Total : 805.72 50.36 0.00 0.00 7857753.45 171.71 353493.74 00:20:26.711 00:20:26.711 real 0m8.115s 00:20:26.711 user 0m14.888s 00:20:26.711 sys 0m0.386s 00:20:26.711 ************************************ 00:20:26.711 END TEST bdev_verify_big_io 00:20:26.711 ************************************ 00:20:26.711 10:05:02 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:26.711 10:05:02 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:20:26.711 10:05:02 blockdev_raid5f -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:26.711 10:05:02 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:20:26.711 10:05:02 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:26.711 10:05:02 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:26.711 ************************************ 00:20:26.711 START TEST bdev_write_zeroes 00:20:26.711 ************************************ 00:20:26.711 10:05:02 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:26.711 [2024-10-21 10:05:03.078132] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:20:26.711 [2024-10-21 10:05:03.079182] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90497 ] 00:20:26.711 [2024-10-21 10:05:03.268955] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:26.970 [2024-10-21 10:05:03.439386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:27.908 Running I/O for 1 seconds... 
00:20:28.847 20319.00 IOPS, 79.37 MiB/s 00:20:28.847 Latency(us) 00:20:28.847 [2024-10-21T10:05:05.442Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:28.847 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:20:28.847 raid5f : 1.01 20286.98 79.25 0.00 0.00 6284.29 1903.12 8642.74 00:20:28.847 [2024-10-21T10:05:05.442Z] =================================================================================================================== 00:20:28.847 [2024-10-21T10:05:05.442Z] Total : 20286.98 79.25 0.00 0.00 6284.29 1903.12 8642.74 00:20:30.754 00:20:30.754 real 0m4.040s 00:20:30.754 user 0m3.516s 00:20:30.754 sys 0m0.385s 00:20:30.754 10:05:07 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:30.754 10:05:07 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:20:30.754 ************************************ 00:20:30.754 END TEST bdev_write_zeroes 00:20:30.754 ************************************ 00:20:30.754 10:05:07 blockdev_raid5f -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:30.754 10:05:07 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:20:30.754 10:05:07 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:30.754 10:05:07 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:30.754 ************************************ 00:20:30.754 START TEST bdev_json_nonenclosed 00:20:30.754 ************************************ 00:20:30.754 10:05:07 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:30.754 [2024-10-21 
10:05:07.186546] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:20:30.754 [2024-10-21 10:05:07.186732] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90556 ] 00:20:31.013 [2024-10-21 10:05:07.360459] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:31.013 [2024-10-21 10:05:07.519935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:31.013 [2024-10-21 10:05:07.520069] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:20:31.013 [2024-10-21 10:05:07.520092] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:20:31.013 [2024-10-21 10:05:07.520104] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:31.272 00:20:31.272 real 0m0.780s 00:20:31.272 user 0m0.512s 00:20:31.272 sys 0m0.162s 00:20:31.272 10:05:07 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:31.272 10:05:07 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:20:31.272 ************************************ 00:20:31.272 END TEST bdev_json_nonenclosed 00:20:31.272 ************************************ 00:20:31.531 10:05:07 blockdev_raid5f -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:31.531 10:05:07 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:20:31.531 10:05:07 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:31.531 10:05:07 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:31.531 
************************************ 00:20:31.531 START TEST bdev_json_nonarray 00:20:31.531 ************************************ 00:20:31.531 10:05:07 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:31.531 [2024-10-21 10:05:08.021406] Starting SPDK v25.01-pre git sha1 1042d663d / DPDK 23.11.0 initialization... 00:20:31.531 [2024-10-21 10:05:08.021546] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90581 ] 00:20:31.790 [2024-10-21 10:05:08.190519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:31.790 [2024-10-21 10:05:08.351010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:31.791 [2024-10-21 10:05:08.351191] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:20:31.791 [2024-10-21 10:05:08.351237] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:20:31.791 [2024-10-21 10:05:08.351253] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:32.359 00:20:32.359 real 0m0.763s 00:20:32.359 user 0m0.512s 00:20:32.359 sys 0m0.145s 00:20:32.359 10:05:08 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:32.359 10:05:08 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:20:32.359 ************************************ 00:20:32.359 END TEST bdev_json_nonarray 00:20:32.359 ************************************ 00:20:32.359 10:05:08 blockdev_raid5f -- bdev/blockdev.sh@786 -- # [[ raid5f == bdev ]] 00:20:32.359 10:05:08 blockdev_raid5f -- bdev/blockdev.sh@793 -- # [[ raid5f == gpt ]] 00:20:32.359 10:05:08 blockdev_raid5f -- bdev/blockdev.sh@797 -- # [[ raid5f == crypto_sw ]] 00:20:32.359 10:05:08 blockdev_raid5f -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:20:32.359 10:05:08 blockdev_raid5f -- bdev/blockdev.sh@810 -- # cleanup 00:20:32.359 10:05:08 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:20:32.359 10:05:08 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:20:32.359 10:05:08 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:20:32.359 10:05:08 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:20:32.359 10:05:08 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:20:32.359 10:05:08 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:20:32.359 00:20:32.359 real 0m53.284s 00:20:32.359 user 1m11.288s 00:20:32.359 sys 0m6.038s 00:20:32.359 10:05:08 blockdev_raid5f -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:32.359 10:05:08 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:32.359 
************************************ 00:20:32.359 END TEST blockdev_raid5f 00:20:32.359 ************************************ 00:20:32.359 10:05:08 -- spdk/autotest.sh@194 -- # uname -s 00:20:32.359 10:05:08 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:20:32.359 10:05:08 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:20:32.359 10:05:08 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:20:32.359 10:05:08 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:20:32.359 10:05:08 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:20:32.359 10:05:08 -- spdk/autotest.sh@256 -- # timing_exit lib 00:20:32.359 10:05:08 -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:32.359 10:05:08 -- common/autotest_common.sh@10 -- # set +x 00:20:32.359 10:05:08 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:20:32.359 10:05:08 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:20:32.359 10:05:08 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:20:32.359 10:05:08 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:20:32.359 10:05:08 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:20:32.359 10:05:08 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:20:32.359 10:05:08 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:20:32.359 10:05:08 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:20:32.359 10:05:08 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:20:32.359 10:05:08 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:20:32.359 10:05:08 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:20:32.359 10:05:08 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:20:32.359 10:05:08 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:20:32.359 10:05:08 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:20:32.359 10:05:08 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:20:32.359 10:05:08 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:20:32.359 10:05:08 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:20:32.359 10:05:08 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:20:32.359 10:05:08 -- spdk/autotest.sh@381 -- # trap - SIGINT 
SIGTERM EXIT 00:20:32.359 10:05:08 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:20:32.359 10:05:08 -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:32.359 10:05:08 -- common/autotest_common.sh@10 -- # set +x 00:20:32.359 10:05:08 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:20:32.359 10:05:08 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:20:32.359 10:05:08 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:20:32.359 10:05:08 -- common/autotest_common.sh@10 -- # set +x 00:20:34.897 INFO: APP EXITING 00:20:34.898 INFO: killing all VMs 00:20:34.898 INFO: killing vhost app 00:20:34.898 INFO: EXIT DONE 00:20:35.157 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:35.157 Waiting for block devices as requested 00:20:35.157 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:35.157 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:36.094 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:36.358 Cleaning 00:20:36.358 Removing: /var/run/dpdk/spdk0/config 00:20:36.358 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:20:36.358 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:20:36.358 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:20:36.358 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:20:36.358 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:20:36.358 Removing: /var/run/dpdk/spdk0/hugepage_info 00:20:36.358 Removing: /dev/shm/spdk_tgt_trace.pid56432 00:20:36.358 Removing: /var/run/dpdk/spdk0 00:20:36.358 Removing: /var/run/dpdk/spdk_pid56186 00:20:36.358 Removing: /var/run/dpdk/spdk_pid56432 00:20:36.358 Removing: /var/run/dpdk/spdk_pid56667 00:20:36.358 Removing: /var/run/dpdk/spdk_pid56771 00:20:36.358 Removing: /var/run/dpdk/spdk_pid56830 00:20:36.358 Removing: /var/run/dpdk/spdk_pid56966 00:20:36.358 Removing: /var/run/dpdk/spdk_pid56990 
00:20:36.358 Removing: /var/run/dpdk/spdk_pid57200 00:20:36.358 Removing: /var/run/dpdk/spdk_pid57311 00:20:36.358 Removing: /var/run/dpdk/spdk_pid57424 00:20:36.358 Removing: /var/run/dpdk/spdk_pid57551 00:20:36.358 Removing: /var/run/dpdk/spdk_pid57665 00:20:36.358 Removing: /var/run/dpdk/spdk_pid57710 00:20:36.358 Removing: /var/run/dpdk/spdk_pid57741 00:20:36.358 Removing: /var/run/dpdk/spdk_pid57817 00:20:36.358 Removing: /var/run/dpdk/spdk_pid57945 00:20:36.358 Removing: /var/run/dpdk/spdk_pid58387 00:20:36.358 Removing: /var/run/dpdk/spdk_pid58462 00:20:36.358 Removing: /var/run/dpdk/spdk_pid58541 00:20:36.358 Removing: /var/run/dpdk/spdk_pid58563 00:20:36.358 Removing: /var/run/dpdk/spdk_pid58714 00:20:36.358 Removing: /var/run/dpdk/spdk_pid58741 00:20:36.358 Removing: /var/run/dpdk/spdk_pid58891 00:20:36.358 Removing: /var/run/dpdk/spdk_pid58913 00:20:36.358 Removing: /var/run/dpdk/spdk_pid58982 00:20:36.358 Removing: /var/run/dpdk/spdk_pid59006 00:20:36.358 Removing: /var/run/dpdk/spdk_pid59070 00:20:36.358 Removing: /var/run/dpdk/spdk_pid59093 00:20:36.358 Removing: /var/run/dpdk/spdk_pid59294 00:20:36.358 Removing: /var/run/dpdk/spdk_pid59325 00:20:36.358 Removing: /var/run/dpdk/spdk_pid59414 00:20:36.358 Removing: /var/run/dpdk/spdk_pid60787 00:20:36.358 Removing: /var/run/dpdk/spdk_pid60993 00:20:36.358 Removing: /var/run/dpdk/spdk_pid61133 00:20:36.358 Removing: /var/run/dpdk/spdk_pid61782 00:20:36.358 Removing: /var/run/dpdk/spdk_pid61988 00:20:36.358 Removing: /var/run/dpdk/spdk_pid62139 00:20:36.358 Removing: /var/run/dpdk/spdk_pid62777 00:20:36.358 Removing: /var/run/dpdk/spdk_pid63107 00:20:36.358 Removing: /var/run/dpdk/spdk_pid63252 00:20:36.358 Removing: /var/run/dpdk/spdk_pid64644 00:20:36.358 Removing: /var/run/dpdk/spdk_pid64902 00:20:36.358 Removing: /var/run/dpdk/spdk_pid65043 00:20:36.358 Removing: /var/run/dpdk/spdk_pid66428 00:20:36.358 Removing: /var/run/dpdk/spdk_pid66687 00:20:36.630 Removing: /var/run/dpdk/spdk_pid66832 
00:20:36.630 Removing: /var/run/dpdk/spdk_pid68220 00:20:36.630 Removing: /var/run/dpdk/spdk_pid68665 00:20:36.630 Removing: /var/run/dpdk/spdk_pid68811 00:20:36.630 Removing: /var/run/dpdk/spdk_pid70296 00:20:36.630 Removing: /var/run/dpdk/spdk_pid70561 00:20:36.630 Removing: /var/run/dpdk/spdk_pid70707 00:20:36.630 Removing: /var/run/dpdk/spdk_pid72208 00:20:36.630 Removing: /var/run/dpdk/spdk_pid72473 00:20:36.630 Removing: /var/run/dpdk/spdk_pid72624 00:20:36.630 Removing: /var/run/dpdk/spdk_pid74126 00:20:36.630 Removing: /var/run/dpdk/spdk_pid74624 00:20:36.630 Removing: /var/run/dpdk/spdk_pid74776 00:20:36.630 Removing: /var/run/dpdk/spdk_pid74925 00:20:36.630 Removing: /var/run/dpdk/spdk_pid75354 00:20:36.630 Removing: /var/run/dpdk/spdk_pid76096 00:20:36.630 Removing: /var/run/dpdk/spdk_pid76473 00:20:36.630 Removing: /var/run/dpdk/spdk_pid77182 00:20:36.630 Removing: /var/run/dpdk/spdk_pid77629 00:20:36.630 Removing: /var/run/dpdk/spdk_pid78388 00:20:36.630 Removing: /var/run/dpdk/spdk_pid78797 00:20:36.630 Removing: /var/run/dpdk/spdk_pid80781 00:20:36.630 Removing: /var/run/dpdk/spdk_pid81225 00:20:36.630 Removing: /var/run/dpdk/spdk_pid81667 00:20:36.630 Removing: /var/run/dpdk/spdk_pid83767 00:20:36.630 Removing: /var/run/dpdk/spdk_pid84253 00:20:36.630 Removing: /var/run/dpdk/spdk_pid84781 00:20:36.630 Removing: /var/run/dpdk/spdk_pid85860 00:20:36.630 Removing: /var/run/dpdk/spdk_pid86183 00:20:36.630 Removing: /var/run/dpdk/spdk_pid87140 00:20:36.630 Removing: /var/run/dpdk/spdk_pid87475 00:20:36.630 Removing: /var/run/dpdk/spdk_pid88429 00:20:36.630 Removing: /var/run/dpdk/spdk_pid88757 00:20:36.630 Removing: /var/run/dpdk/spdk_pid89438 00:20:36.630 Removing: /var/run/dpdk/spdk_pid89725 00:20:36.630 Removing: /var/run/dpdk/spdk_pid89798 00:20:36.630 Removing: /var/run/dpdk/spdk_pid89846 00:20:36.630 Removing: /var/run/dpdk/spdk_pid90105 00:20:36.630 Removing: /var/run/dpdk/spdk_pid90286 00:20:36.630 Removing: /var/run/dpdk/spdk_pid90391 
00:20:36.630 Removing: /var/run/dpdk/spdk_pid90497 00:20:36.630 Removing: /var/run/dpdk/spdk_pid90556 00:20:36.630 Removing: /var/run/dpdk/spdk_pid90581 00:20:36.630 Clean 00:20:36.630 10:05:13 -- common/autotest_common.sh@1451 -- # return 0 00:20:36.630 10:05:13 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:20:36.630 10:05:13 -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:36.630 10:05:13 -- common/autotest_common.sh@10 -- # set +x 00:20:36.889 10:05:13 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:20:36.889 10:05:13 -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:36.889 10:05:13 -- common/autotest_common.sh@10 -- # set +x 00:20:36.889 10:05:13 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:20:36.889 10:05:13 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:20:36.889 10:05:13 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:20:36.889 10:05:13 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:20:36.889 10:05:13 -- spdk/autotest.sh@394 -- # hostname 00:20:36.889 10:05:13 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:20:37.148 geninfo: WARNING: invalid characters removed from testname! 
00:21:03.708 10:05:36 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:03.708 10:05:39 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:05.089 10:05:41 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:07.630 10:05:43 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:09.641 10:05:45 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:12.179 10:05:48 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:14.089 10:05:50 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:21:14.089 10:05:50 -- common/autotest_common.sh@1690 -- $ [[ y == y ]] 00:21:14.089 10:05:50 -- common/autotest_common.sh@1691 -- $ lcov --version 00:21:14.089 10:05:50 -- common/autotest_common.sh@1691 -- $ awk '{print $NF}' 00:21:14.089 10:05:50 -- common/autotest_common.sh@1691 -- $ lt 1.15 2 00:21:14.089 10:05:50 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:21:14.089 10:05:50 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:21:14.089 10:05:50 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:21:14.089 10:05:50 -- scripts/common.sh@336 -- $ IFS=.-: 00:21:14.089 10:05:50 -- scripts/common.sh@336 -- $ read -ra ver1 00:21:14.089 10:05:50 -- scripts/common.sh@337 -- $ IFS=.-: 00:21:14.089 10:05:50 -- scripts/common.sh@337 -- $ read -ra ver2 00:21:14.089 10:05:50 -- scripts/common.sh@338 -- $ local 'op=<' 00:21:14.089 10:05:50 -- scripts/common.sh@340 -- $ ver1_l=2 00:21:14.089 10:05:50 -- scripts/common.sh@341 -- $ ver2_l=1 00:21:14.089 10:05:50 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:21:14.089 10:05:50 -- scripts/common.sh@344 -- $ case "$op" in 00:21:14.089 10:05:50 -- scripts/common.sh@345 -- $ : 1 00:21:14.089 10:05:50 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:21:14.089 10:05:50 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:14.089 10:05:50 -- scripts/common.sh@365 -- $ decimal 1 00:21:14.089 10:05:50 -- scripts/common.sh@353 -- $ local d=1 00:21:14.089 10:05:50 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:21:14.089 10:05:50 -- scripts/common.sh@355 -- $ echo 1 00:21:14.089 10:05:50 -- scripts/common.sh@365 -- $ ver1[v]=1 00:21:14.089 10:05:50 -- scripts/common.sh@366 -- $ decimal 2 00:21:14.089 10:05:50 -- scripts/common.sh@353 -- $ local d=2 00:21:14.089 10:05:50 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:21:14.089 10:05:50 -- scripts/common.sh@355 -- $ echo 2 00:21:14.089 10:05:50 -- scripts/common.sh@366 -- $ ver2[v]=2 00:21:14.089 10:05:50 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:21:14.089 10:05:50 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:21:14.089 10:05:50 -- scripts/common.sh@368 -- $ return 0 00:21:14.089 10:05:50 -- common/autotest_common.sh@1692 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:14.089 10:05:50 -- common/autotest_common.sh@1704 -- $ export 'LCOV_OPTS= 00:21:14.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:14.089 --rc genhtml_branch_coverage=1 00:21:14.089 --rc genhtml_function_coverage=1 00:21:14.089 --rc genhtml_legend=1 00:21:14.089 --rc geninfo_all_blocks=1 00:21:14.089 --rc geninfo_unexecuted_blocks=1 00:21:14.089 00:21:14.089 ' 00:21:14.089 10:05:50 -- common/autotest_common.sh@1704 -- $ LCOV_OPTS=' 00:21:14.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:14.089 --rc genhtml_branch_coverage=1 00:21:14.089 --rc genhtml_function_coverage=1 00:21:14.089 --rc genhtml_legend=1 00:21:14.089 --rc geninfo_all_blocks=1 00:21:14.089 --rc geninfo_unexecuted_blocks=1 00:21:14.089 00:21:14.089 ' 00:21:14.089 10:05:50 -- common/autotest_common.sh@1705 -- $ export 'LCOV=lcov 00:21:14.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:14.089 --rc genhtml_branch_coverage=1 00:21:14.089 --rc 
genhtml_function_coverage=1 00:21:14.089 --rc genhtml_legend=1 00:21:14.089 --rc geninfo_all_blocks=1 00:21:14.089 --rc geninfo_unexecuted_blocks=1 00:21:14.089 00:21:14.089 ' 00:21:14.089 10:05:50 -- common/autotest_common.sh@1705 -- $ LCOV='lcov 00:21:14.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:14.089 --rc genhtml_branch_coverage=1 00:21:14.089 --rc genhtml_function_coverage=1 00:21:14.089 --rc genhtml_legend=1 00:21:14.089 --rc geninfo_all_blocks=1 00:21:14.089 --rc geninfo_unexecuted_blocks=1 00:21:14.089 00:21:14.089 ' 00:21:14.089 10:05:50 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:14.089 10:05:50 -- scripts/common.sh@15 -- $ shopt -s extglob 00:21:14.089 10:05:50 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:21:14.089 10:05:50 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:14.089 10:05:50 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:14.089 10:05:50 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:14.089 10:05:50 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:14.089 10:05:50 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:14.089 10:05:50 -- paths/export.sh@5 -- $ export PATH 00:21:14.089 10:05:50 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:14.350 10:05:50 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:21:14.350 10:05:50 -- common/autobuild_common.sh@486 -- $ date +%s 00:21:14.350 10:05:50 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1729505150.XXXXXX 00:21:14.350 10:05:50 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1729505150.iuV8PT 00:21:14.350 10:05:50 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:21:14.350 10:05:50 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:21:14.350 10:05:50 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:21:14.350 10:05:50 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:21:14.350 10:05:50 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:21:14.350 10:05:50 -- common/autobuild_common.sh@502 -- $ 
get_config_params 00:21:14.350 10:05:50 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:21:14.350 10:05:50 -- common/autotest_common.sh@10 -- $ set +x 00:21:14.350 10:05:50 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f' 00:21:14.350 10:05:50 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:21:14.350 10:05:50 -- pm/common@17 -- $ local monitor 00:21:14.350 10:05:50 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:21:14.350 10:05:50 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:21:14.350 10:05:50 -- pm/common@25 -- $ sleep 1 00:21:14.350 10:05:50 -- pm/common@21 -- $ date +%s 00:21:14.350 10:05:50 -- pm/common@21 -- $ date +%s 00:21:14.350 10:05:50 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1729505150 00:21:14.350 10:05:50 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1729505150 00:21:14.350 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1729505150_collect-vmstat.pm.log 00:21:14.350 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1729505150_collect-cpu-load.pm.log 00:21:15.288 10:05:51 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:21:15.288 10:05:51 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]] 00:21:15.288 10:05:51 -- spdk/autopackage.sh@14 -- $ timing_finish 00:21:15.288 10:05:51 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:21:15.288 10:05:51 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:21:15.288 
10:05:51 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:21:15.288 10:05:51 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:21:15.288 10:05:51 -- pm/common@29 -- $ signal_monitor_resources TERM 00:21:15.288 10:05:51 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:21:15.288 10:05:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:21:15.288 10:05:51 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:21:15.288 10:05:51 -- pm/common@44 -- $ pid=92098 00:21:15.288 10:05:51 -- pm/common@50 -- $ kill -TERM 92098 00:21:15.288 10:05:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:21:15.288 10:05:51 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:21:15.288 10:05:51 -- pm/common@44 -- $ pid=92100 00:21:15.288 10:05:51 -- pm/common@50 -- $ kill -TERM 92100 00:21:15.288 + [[ -n 5420 ]] 00:21:15.288 + sudo kill 5420 00:21:15.298 [Pipeline] } 00:21:15.314 [Pipeline] // timeout 00:21:15.320 [Pipeline] } 00:21:15.335 [Pipeline] // stage 00:21:15.340 [Pipeline] } 00:21:15.355 [Pipeline] // catchError 00:21:15.365 [Pipeline] stage 00:21:15.367 [Pipeline] { (Stop VM) 00:21:15.380 [Pipeline] sh 00:21:15.663 + vagrant halt 00:21:18.955 ==> default: Halting domain... 00:21:27.083 [Pipeline] sh 00:21:27.365 + vagrant destroy -f 00:21:30.653 ==> default: Removing domain... 
00:21:30.664 [Pipeline] sh 00:21:30.942 + mv output /var/jenkins/workspace/raid-vg-autotest/output 00:21:30.951 [Pipeline] } 00:21:30.966 [Pipeline] // stage 00:21:30.971 [Pipeline] } 00:21:30.985 [Pipeline] // dir 00:21:30.991 [Pipeline] } 00:21:31.005 [Pipeline] // wrap 00:21:31.011 [Pipeline] } 00:21:31.024 [Pipeline] // catchError 00:21:31.033 [Pipeline] stage 00:21:31.035 [Pipeline] { (Epilogue) 00:21:31.048 [Pipeline] sh 00:21:31.330 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:21:37.923 [Pipeline] catchError 00:21:37.925 [Pipeline] { 00:21:37.938 [Pipeline] sh 00:21:38.215 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:21:38.215 Artifacts sizes are good 00:21:38.223 [Pipeline] } 00:21:38.237 [Pipeline] // catchError 00:21:38.248 [Pipeline] archiveArtifacts 00:21:38.255 Archiving artifacts 00:21:38.351 [Pipeline] cleanWs 00:21:38.363 [WS-CLEANUP] Deleting project workspace... 00:21:38.365 [WS-CLEANUP] Deferred wipeout is used... 00:21:38.374 [WS-CLEANUP] done 00:21:38.384 [Pipeline] } 00:21:38.400 [Pipeline] // stage 00:21:38.405 [Pipeline] } 00:21:38.419 [Pipeline] // node 00:21:38.425 [Pipeline] End of Pipeline 00:21:38.466 Finished: SUCCESS